
Publications of year 2019
Thesis
  1. Carlos Gomez. Reliability in Desktop Cloud Systems. PhD thesis, Universidad de los Andes, 2019. Keyword(s): cloud computing, distributed systems, desktop clouds.
    Abstract:
    A desktop cloud is an opportunistic platform that provides cloud computing services on desktop computers, typically located on a university or business campus. Desktop cloud systems take advantage of the idle resources of computers when their users perform routine activities, or when the computers are fully available. A desktop cloud manages these resources to run virtual machines, with their operating systems and applications, without affecting the performance perceived by the users of the computers. Virtual machines allow the users of a desktop cloud, typically researchers, to execute their academic or scientific applications at the same time as the processes of the desktop users. Since the infrastructure of a desktop cloud is based on non-dedicated computers, these systems are more susceptible to failure than traditional cloud computing providers. These platforms must cope with the interruptions and interference caused by the users of the desktop computers and their applications. For example, the users of the computers can turn them off or restart them, disconnect them from the network, or run computationally demanding applications that disrupt the normal execution of the virtual machines. Desktop clouds have been used successfully to execute bag-of-tasks applications, where a problem is solved by dividing it into independent tasks that run in parallel. Because these tasks are independent, any of them can be executed on another physical machine if a node of the platform fails. Other applications, with processes that communicate with each other, for example applications based on the Message Passing Interface (MPI), are more fragile in the face of failures, since a failure in one node can affect the entire system. There are several implementations of desktop clouds or similar systems, such as CernVM, cuCloud, ad hoc cloud computing, GBAC, and UnaCloud, our case study. These systems take advantage of the idle resources of the participating computers to execute virtual machines. Because access to resources is opportunistic, desktop cloud systems generally offer a best-effort service, with no guarantees about the successful execution of the applications running in the virtual machines. However, these platforms can move towards reliable service delivery despite the volatility of the computational resources on which they rely. In this way, new kinds of applications can be executed and some service guarantees can be offered to their users. In this doctoral thesis we have performed and refined a fault analysis covering different desktop clouds and similar systems, with emphasis on UnaCloud. As a result of this analysis, we found that desktop cloud systems fail mainly at two moments: during the provisioning and during the execution of virtual machines. We have proposed an extended chain of threats and used it to identify the main faults that occur in these two phases, determining their causes (anomalies, interruptions, and errors) and their consequences. In addition, we suggest some mitigation strategies to counteract anomalies and interruptions. We compiled an initial set of strategies to mitigate the effects of the identified failures, developed a functional prototype that responds to those failures using the strategies with the greatest impact in our context, and evaluated its behavior.
Thanks to this analysis, we found that the provisioning of virtual machines has significant scalability limitations. Virtual images are large files whose transmission through the network is slow and prone to failures. Moreover, reliability problems appear as the number of virtual machines to be run on the system grows, particularly because of the disk space they occupy, the network usage, and the time provisioning takes. On the other hand, the actions carried out by the users of the computers, or the applications they launch, can interrupt the execution of the virtual machines running in the desktop cloud and ruin the work done up to that moment. We have proposed a solution that mitigates the effects of failures in both the provisioning and the execution of virtual machines. First, we implemented a new virtual machine provisioning model for a desktop cloud based on a catalog of preconfigured virtual images, kept on multi-attach writable disks that are pre-loaded on the computers where the virtual machines run. Second, we developed a global snapshot solution to store the state of a distributed system that runs in the virtual machines of a desktop cloud. Our implementation obtains multiple global snapshots and provides a mechanism to resume execution consistently from any of them, without missing or duplicated messages. We have validated the software through functional and performance tests and verified that the proposed solutions can be used to improve the reliability of a desktop cloud system.

    @PhdThesis{gomez-phd-2019,
    author = {Carlos Gomez},
    title = {{Reliability in Desktop Cloud Systems}},
    school = {Universidad de los Andes},
    year = 2019,
    pdf = {http://wcl.cs.rpi.edu/theses/gomez_phd.pdf},
    keywords = {cloud computing, distributed systems, desktop clouds},
    abstract = {A desktop cloud is an opportunistic platform that provides cloud computing services on desktop computers, typically located on a university or business campus. Desktop cloud systems take advantage of the idle resources of computers when their users perform routine activities, or when the computers are fully available. A desktop cloud manages these resources to run virtual machines, with their operating systems and applications, without affecting the performance perceived by the users of the computers. Virtual machines allow the users of a desktop cloud, typically researchers, to execute their academic or scientific applications at the same time as the processes of the desktop users. Since the infrastructure of a desktop cloud is based on non-dedicated computers, these systems are more susceptible to failure than traditional cloud computing providers. These platforms must cope with the interruptions and interference caused by the users of the desktop computers and their applications. For example, the users of the computers can turn them off or restart them, disconnect them from the network, or run computationally demanding applications that disrupt the normal execution of the virtual machines. Desktop clouds have been used successfully to execute bag-of-tasks applications, where a problem is solved by dividing it into independent tasks that run in parallel. Because these tasks are independent, any of them can be executed on another physical machine if a node of the platform fails. Other applications, with processes that communicate with each other, for example applications based on the Message Passing Interface (MPI), are more fragile in the face of failures, since a failure in one node can affect the entire system. There are several implementations of desktop clouds or similar systems, such as CernVM, cuCloud, ad hoc cloud computing, GBAC, and UnaCloud, our case study. These systems take advantage of the idle resources of the participating computers to execute virtual machines. Because access to resources is opportunistic, desktop cloud systems generally offer a best-effort service, with no guarantees about the successful execution of the applications running in the virtual machines. However, these platforms can move towards reliable service delivery despite the volatility of the computational resources on which they rely. In this way, new kinds of applications can be executed and some service guarantees can be offered to their users. In this doctoral thesis we have performed and refined a fault analysis covering different desktop clouds and similar systems, with emphasis on UnaCloud. As a result of this analysis, we found that desktop cloud systems fail mainly at two moments: during the provisioning and during the execution of virtual machines. We have proposed an extended chain of threats and used it to identify the main faults that occur in these two phases, determining their causes (anomalies, interruptions, and errors) and their consequences. In addition, we suggest some mitigation strategies to counteract anomalies and interruptions. We compiled an initial set of strategies to mitigate the effects of the identified failures, developed a functional prototype that responds to those failures using the strategies with the greatest impact in our context, and evaluated its behavior.
Thanks to this analysis, we found that the provisioning of virtual machines has significant scalability limitations. Virtual images are large files whose transmission through the network is slow and prone to failures. Moreover, reliability problems appear as the number of virtual machines to be run on the system grows, particularly because of the disk space they occupy, the network usage, and the time provisioning takes. On the other hand, the actions carried out by the users of the computers, or the applications they launch, can interrupt the execution of the virtual machines running in the desktop cloud and ruin the work done up to that moment. We have proposed a solution that mitigates the effects of failures in both the provisioning and the execution of virtual machines. First, we implemented a new virtual machine provisioning model for a desktop cloud based on a catalog of preconfigured virtual images, kept on multi-attach writable disks that are pre-loaded on the computers where the virtual machines run. Second, we developed a global snapshot solution to store the state of a distributed system that runs in the virtual machines of a desktop cloud. Our implementation obtains multiple global snapshots and provides a mechanism to resume execution consistently from any of them, without missing or duplicated messages. We have validated the software through functional and performance tests and verified that the proposed solutions can be used to improve the reliability of a desktop cloud system.}
    }
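
    The thesis above describes a global-snapshot mechanism that lets a distributed application running in the desktop cloud's virtual machines resume consistently from a saved state, with no missing or duplicated messages. For intuition only, the sketch below illustrates the classic Chandy-Lamport marker-based snapshot idea on which such mechanisms are commonly built; it is a simplified, hypothetical Python model, not the UnaCloud implementation developed in the thesis.

    # Illustrative Chandy-Lamport-style snapshot logic for one process.
    # Simplified sketch for intuition only; not the thesis/UnaCloud code.
    MARKER = "MARKER"  # control message injected into ordinary FIFO channels

    class SnapshotProcess:
        def __init__(self, pid, in_channels, out_channels, send):
            self.pid = pid
            self.in_channels = list(in_channels)    # incoming FIFO channel ids
            self.out_channels = list(out_channels)  # outgoing FIFO channel ids
            self.send = send                        # send(channel_id, message)
            self.state = []                         # application state (simplified)
            self.recorded_state = None              # local state saved for the snapshot
            self.recording = {}                     # channel id -> in-flight messages

        def _record_locally(self):
            # Save local state, start recording every incoming channel,
            # and propagate markers on all outgoing channels.
            self.recorded_state = list(self.state)
            self.recording = {c: [] for c in self.in_channels}
            for c in self.out_channels:
                self.send(c, MARKER)

        def initiate_snapshot(self):
            self._record_locally()

        def on_message(self, channel, msg):
            if msg == MARKER:
                if self.recorded_state is None:
                    self._record_locally()
                # The marker closes this channel's recording: whatever was
                # recorded so far is the channel's in-flight state.
                self.recording[channel] = tuple(self.recording[channel])
            else:
                if self.recorded_state is not None and isinstance(self.recording.get(channel), list):
                    self.recording[channel].append(msg)  # message was in flight
                self.state.append(msg)                   # normal application processing

    Resuming from such a snapshot means restoring each process's recorded state and re-delivering the recorded in-flight channel messages, which is what prevents messages from being lost or duplicated on restart.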
    


Articles in journal, book chapters
  1. S. Shelton, H. Newberg, J. Weiss, J. Bauer, M. Arsenault, L. Widrow, C. Rayment, R. Judd, T. Desell, M. Magdon-Ismail, M. Newby, C. Rice, B. Szymanski, J. Thompson, C. Varela, B. Willett, S. Ulin, and L. Newberg. An Algorithm for Reconstructing the Orphan Stream Progenitor with MilkyWay@home Volunteer Computing. The Astrophysical Journal, pp 26, 2019. Note: Accepted May 2019, to appear. Keyword(s): distributed systems, cloud computing.
    Abstract:
    We have developed a method for estimating the properties of the progenitor dwarf galaxy from the tidal stream of stars that were ripped from it as it fell into the Milky Way. In particular, we show that the mass and radial profile of a progenitor dwarf galaxy evolved along the orbit of the Orphan Stream, including the stellar and dark matter components, can be reconstructed from the distribution of stars in the tidal stream it produced. We use MilkyWay@home, a PetaFLOPS-scale distributed supercomputer, to optimize our dwarf galaxy parameters until we arrive at best-fit parameters. The algorithm fits the dark matter mass, dark matter radius, stellar mass, radial profile of stars, and orbital time. The parameters are recovered even though the dark matter component extends well past the half light radius of the dwarf galaxy progenitor, proving that we are able to extract information about the dark matter halos of dwarf galaxies from the tidal debris. Our simulations assumed that the Milky Way potential, dwarf galaxy orbit, and the form of the density model for the dwarf galaxy were known exactly; more work is required to evaluate the sources of systematic error in fitting real data. This method can be used to estimate the dark matter content in dwarf galaxies without the assumption of virial equilibrium that is required to estimate the mass using line-of-sight velocities. This demonstration is a first step towards building an infrastructure that will fit the Milky Way potential using multiple tidal streams.

    @Article{shelton-apj-2019,
    author = {S. Shelton and H. Newberg and J. Weiss and J. Bauer and M. Arsenault and L. Widrow and C. Rayment and R. Judd and T. Desell and M. Magdon-Ismail and M. Newby and C. Rice and B. Szymanski and J. Thompson and C. Varela and B. Willett and S. Ulin and L. Newberg},
    title = {An Algorithm for Reconstructing the Orphan Stream Progenitor with MilkyWay@home Volunteer Computing},
    journal = {The Astrophysical Journal},
    pages = {26},
    year = {2019},
    url = {http://wcl.cs.rpi.edu/papers/Nbody_ApJ_19.pdf},
    note = {Accepted May 2019, to appear},
    keywords = {distributed systems, cloud computing},
    abstract = {We have developed a method for estimating the properties of the progenitor dwarf galaxy from the tidal stream of stars that were ripped from it as it fell into the Milky Way. In particular, we show that the mass and radial profile of a progenitor dwarf galaxy evolved along the orbit of the Orphan Stream, including the stellar and dark matter components, can be reconstructed from the distribution of stars in the tidal stream it produced. We use MilkyWay@home, a PetaFLOPS-scale distributed supercomputer, to optimize our dwarf galaxy parameters until we arrive at best-fit parameters. The algorithm fits the dark matter mass, dark matter radius, stellar mass, radial profile of stars, and orbital time. The parameters are recovered even though the dark matter component extends well past the half light radius of the dwarf galaxy progenitor, proving that we are able to extract information about the dark matter halos of dwarf galaxies from the tidal debris. Our simulations assumed that the Milky Way potential, dwarf galaxy orbit, and the form of the density model for the dwarf galaxy were known exactly; more work is required to evaluate the sources of systematic error in fitting real data. This method can be used to estimate the dark matter content in dwarf galaxies without the assumption of virial equilibrium that is required to estimate the mass using line-of-sight velocities. This demonstration is a first step towards building an infrastructure that will fit the Milky Way potential using multiple tidal streams.} 
    }
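
    The entry above fits progenitor dwarf-galaxy parameters (dark matter mass and radius, stellar mass, radial profile, orbital time) by optimizing them until a simulated tidal stream matches the observed stream. The sketch below shows the general shape of such an outer optimization loop; the simulator stub, input file name, and parameter bounds are hypothetical placeholders, and MilkyWay@home distributes this search over volunteer hosts rather than running it locally as shown.

    # Hypothetical outer loop for fitting progenitor parameters to an observed
    # stream density profile (illustration only; not the MilkyWay@home code).
    import numpy as np
    from scipy.optimize import differential_evolution

    def simulate_stream(params, n_bins):
        """Stand-in for an N-body simulation of the disrupting dwarf galaxy;
        returns a simulated stellar density per bin along the stream."""
        dm_mass, dm_radius, stellar_mass, stellar_radius, evolve_time = params
        rng = np.random.default_rng(0)
        return np.abs(rng.normal(stellar_mass / n_bins, 0.1, n_bins))

    def objective(params, observed_density):
        """Misfit between simulated and observed stream densities."""
        simulated = simulate_stream(params, observed_density.size)
        return float(np.sum((simulated - observed_density) ** 2))

    if __name__ == "__main__":
        observed = np.loadtxt("orphan_stream_density.txt")  # hypothetical input file
        bounds = [(1e6, 1e10),   # dark matter mass (assumed range, solar masses)
                  (0.1, 5.0),    # dark matter scale radius (kpc)
                  (1e4, 1e8),    # stellar mass (solar masses)
                  (0.05, 2.0),   # stellar scale radius (kpc)
                  (1.0, 6.0)]    # evolution time along the orbit (Gyr)
        result = differential_evolution(objective, bounds, args=(observed,), maxiter=50)
        print("best-fit parameters:", result.x)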
    


Conference articles
  1. E. Blasch, J. D. Ashdown, C. A. Varela, F. Kopsaftopoulos, and R. H. Newkirk. Dynamic Data Driven Analytics for Multi-domain Environments. In Next-Generation Sensor Systems and Applications Track at 2019 SPIE Defense + Commercial Sensing Conference, Baltimore, MD, April 2019. Keyword(s): data streaming, dddas, cyber physical systems.
    Abstract:
    Recent trends in artificial intelligence and machine learning (AI/ML), dynamic data driven application systems (DDDAS), and cloud computing provide opportunities for enhancing multidomain systems performance. The DDDAS framework utilizes models, measurements, and computation to enhance real-time sensing, performance, and analysis. One example that represents a multi-domain scenario is a “fly-by-feel” avionics system that can support autonomous operations. A “fly-by-feel” system measures the aerodynamic forces (wind, pressure, temperature) for physics-based adaptive flight control to increase maneuverability, safety and fuel efficiency. This paper presents a multidomain approach that identifies safe flight operation platform position needs from which models, data, and information are invoked for effective multidomain control. Concepts are presented to demonstrate the DDDAS approach for enhanced multi-domain coordination bringing together modeling (data at rest), control (data in motion) and command (data in use).

    @InProceedings{blasch-spie-2019,
    author = {E. Blasch and J. D. Ashdown and C. A. Varela and F. Kopsaftopoulos and R. H. Newkirk},
    title = {Dynamic Data Driven Analytics for Multi-domain Environments},
    booktitle = {Next-Generation Sensor Systems and Applications Track at 2019 SPIE Defense + Commercial Sensing Conference},
    year = {2019},
    address = {Baltimore, MD},
    month = {April},
    url = {http://wcl.cs.rpi.edu/papers/SPIE19_Multidomain.pdf},
    keywords = {data streaming, dddas, cyber physical systems},
    abstract = {Recent trends in artificial intelligence and machine learning (AI/ML), dynamic data driven application systems (DDDAS), and cloud computing provide opportunities for enhancing multidomain systems performance. The DDDAS framework utilizes models, measurements, and computation to enhance real-time sensing, performance, and analysis. One example that represents a multi-domain scenario is a “fly-by-feel” avionics system that can support autonomous operations. A “fly-by-feel” system measures the aerodynamic forces (wind, pressure, temperature) for physics-based adaptive flight control to increase maneuverability, safety and fuel efficiency. This paper presents a multidomain approach that identifies safe flight operation platform position needs from which models, data, and information are invoked for effective multidomain control. Concepts are presented to demonstrate the DDDAS approach for enhanced multi-domain coordination bringing together modeling (data at rest), control (data in motion) and command (data in use).} 
    }
    


  2. S. Breese, F. Kopsaftopoulos, and C. A. Varela. Towards Proving Runtime Properties of Data-Driven Systems Using Safety Envelopes. In The 12th International Workshop on Structural Health Monitoring, Stanford, CA, September 2019. Keyword(s): cyber physical systems, dddas, formal verification.
    Abstract:
    Dynamic data-driven application systems (DDDAS) allow for unprecedented self-healing and self-diagnostic behavior across a broad swathe of domains. The usefulness of these systems is offset against their inherent complexity, and therefore fragility to specification or implementation error. Further, DDDAS techniques are often applied in safety-critical domains, where correctness is paramount. Formal methods facilitate the development of correctness proofs about software systems, which provide stronger behavioral guarantees than non-exhaustive unit tests. While unit testing can validate that a system behaves correctly in some finite number of configurations, formal methods enable us to prove correctness in an infinite subset of the configuration space, which is often needed in cyber-physical systems involving continuous mechanics. Although the efficacy of formal methods is traditionally offset by significantly greater development cost, we propose new development techniques that can mitigate this concern. In this paper, we explore novel techniques for assuring the correctness of data-driven systems based on certified programming and software verification. In particular, we focus on the use of interactive theorem-proving systems to prove foundational properties about data-driven systems, possibly reliant upon physics-based assumptions and models. We introduce the concept of the formal safety envelope, analogous to the concept of an aircraft’s performance envelope, which organizes system properties in a way that makes it clear which properties hold under which assumptions. Beyond maintaining modularity in proof development, this technique furthermore enables the derivation of runtime monitors to detect potentially unsafe system state changes, allowing the user to know precisely which properties have been verified to hold for the current system state. Using this method, we demonstrate the partial verification of an archetypal data-driven system from avionics, where wing sensor data is used to determine whether or not an airplane is likely to be in a stall state.

    @InProceedings{breese-iwshm-2019_,
    author = {S. Breese and F. Kopsaftopoulos and C. A. Varela},
    title = {Towards Proving Runtime Properties of Data-Driven Systems Using Safety Envelopes},
    booktitle = {The 12th International Workshop on Structural Health Monitoring},
    year = {2019},
    address = {Stanford, CA},
    month = {September},
    url = {http://wcl.cs.rpi.edu/papers/IWSHM19_brees.pdf},
    keywords = {cyber physical systems, dddas, formal verification},
    abstract = {Dynamic data-driven application systems (DDDAS) allow for unprecedented self-healing and self-diagnostic behavior across a broad swathe of domains. The usefulness of these systems is offset against their inherent complexity, and therefore fragility to specification or implementation error. Further, DDDAS techniques are often applied in safety-critical domains, where correctness is paramount. Formal methods facilitate the development of correctness proofs about software systems, which provide stronger behavioral guarantees than non-exhaustive unit tests. While unit testing can validate that a system behaves correctly in some finite number of configurations, formal methods enable us to prove correctness in an infinite subset of the configuration space, which is often needed in cyber-physical systems involving continuous mechanics. Although the efficacy of formal methods is traditionally offset by significantly greater development cost, we propose new development techniques that can mitigate this concern. In this paper, we explore novel techniques for assuring the correctness of data-driven systems based on certified programming and software verification. In particular, we focus on the use of interactive theorem-proving systems to prove foundational properties about data-driven systems, possibly reliant upon physics-based assumptions and models. We introduce the concept of the formal safety envelope, analogous to the concept of an aircraft’s performance envelope, which organizes system properties in a way that makes it clear which properties hold under which assumptions. Beyond maintaining modularity in proof development, this technique furthermore enables the derivation of runtime monitors to detect potentially unsafe system state changes, allowing the user to know precisely which properties have been verified to hold for the current system state. Using this method, we demonstrate the partial verification of an archetypal data-driven system from avionics, where wing sensor data is used to determine whether or not an airplane is likely to be in a stall state.} 
    }
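
    The paper above organizes verified properties into "safety envelopes": each property is guaranteed only under stated assumptions, and a runtime monitor derived from the envelopes reports which guarantees currently hold. The sketch below is a hypothetical Python rendering of such a monitor for the stall example; the thresholds, sensor names, and envelope wording are assumptions, and the paper's actual envelopes are developed and proved in an interactive theorem prover rather than coded this way.

    # Hypothetical runtime monitor derived from safety-envelope-style properties.
    from dataclasses import dataclass

    @dataclass
    class WingState:
        airspeed: float          # m/s, e.g. from a pitot probe
        angle_of_attack: float   # degrees, e.g. from an AoA vane
        sensors_valid: bool      # outcome of a separate sensor-validity check

    # Each envelope pairs a property name (verified under assumptions) with the
    # runtime condition under which that property's guarantee applies.
    ENVELOPES = [
        ("no-stall via AoA: sensors valid and AoA below assumed critical 15 deg",
         lambda s: s.sensors_valid and s.angle_of_attack < 15.0),
        ("no-stall via speed: sensors valid and airspeed above assumed 60 m/s stall speed",
         lambda s: s.sensors_valid and s.airspeed > 60.0),
    ]

    def monitor(state):
        """Return the envelopes whose guarantees currently hold; an empty list
        means no verified 'not in stall' claim can be made for this state."""
        return [name for name, holds in ENVELOPES if holds(state)]

    if __name__ == "__main__":
        print(monitor(WingState(airspeed=72.0, angle_of_attack=6.0, sensors_valid=True)))
        print(monitor(WingState(airspeed=55.0, angle_of_attack=17.0, sensors_valid=True)))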
    


  3. Camilo Castellanos, Boris Perez, Carlos A. Varela, Maria del Pilar Villamil, and Dario Correal. A Survey on Big Data Analytics Solutions Deployment. In Software Architecture, Cham, pages 195-210, September 2019. Springer International Publishing. ISBN: 978-3-030-29983-5. Keyword(s): distributed computing, distributed systems.
    Abstract:
    There is widespread and increasing interest in big data analytics (BDA) solutions to enable data collection, transformation, and predictive analyses. The development and operation of BDA applications involve business innovation, advanced analytics and cutting-edge technologies which add new complexities to traditional software development. Although there is a growing interest in BDA adoption, successful deployments are still scarce (a.k.a. the “Deployment Gap” phenomenon). This paper reports an empirical study on BDA deployment practices, techniques and tools in the industry from both the software architecture and data science perspectives to understand research challenges that emerge in this context. Our results suggest new research directions to be tackled by the software architecture community. In particular, competing architectural drivers, interoperability, and deployment procedures in the BDA field are still immature or have not been adopted in practice.

    @InProceedings{castellanos-bdasurvey-2019,
    author = "Castellanos, Camilo and Perez, Boris and Varela, Carlos A. and Villamil, Maria del Pilar and Correal, Dario",
    title = "A Survey on Big Data Analytics Solutions Deployment",
    booktitle = "Software Architecture",
    year = "2019",
    month = "September",
    keywords = {distributed computing, distributed systems},
    publisher = "Springer International Publishing",
    address = "Cham",
    pages = "195--210",
    pdf = {http://wcl.cs.rpi.edu/papers/BDA_survey_2019.pdf},
    abstract = "There are widespread and increasing interest in big data analytics (BDA) solutions to enable data collection, transformation, and predictive analyses. The development and operation of BDA application involve business innovation, advanced analytics and cutting-edge technologies which add new complexities to the traditional software development. Although there is a growing interest in BDA adoption, successful deployments are still scarce (a.k.a., the ``Deployment Gap'' phenomenon). This paper reports an empirical study on BDA deployment practices, techniques and tools in the industry from both the software architecture and data science perspectives to understand research challenges that emerge in this context. Our results suggest new research directions to be tackled by the software architecture community. In particular, competing architectural drivers, interoperability, and deployment procedures in the BDA field are still immature or have not been adopted in practice.",
    isbn = "978-3-030-29983-5" 
    }
    


  4. Camilo Castellanos, Carlos A. Varela, and Dario Correal. Measuring Performance Quality Scenarios in Big Data Analytics Applications: A DevOps and Domain-Specific Model Approach. In ECSA 2019 - International Workshop on Software Architecture Challenges in Big Data (SACBD), 2019. Keyword(s): distributed computing, distributed systems.
    Abstract:
    Big data analytics (BDA) applications use advanced analysis algorithms to extract valuable insights from large, fast, and heterogeneous data sources. These complex BDA applications require software design, development, and deployment strategies to deal with volume, velocity, and variety (3vs) while sustaining expected performance levels. BDA software complexity frequently leads to delayed deployments, longer development cycles and challenging performance monitoring. This paper proposes a DevOps and Domain Specific Model (DSM) approach to design, deploy, and monitor performance Quality Scenarios (QS) in BDA applications. This approach uses high-level abstractions to describe deployment strategies and QS enabling performance monitoring. Our experimentation compares the effort of development, deployment and QS monitoring of BDA applications with two use cases of near mid-air collisions (NMAC) detection. The use cases include different performance QS, processing models, and deployment strategies. Our results show shorter (re)deployment cycles and the fulfillment of latency and deadline QS for micro-batch and batch processing.

    @InProceedings{castellanos-icsa-2019,
    author = "Castellanos, Camilo and Varela, Carlos A. and Correal, Dario",
    title = "Measuring Performance Quality Scenarios in Big Data Analytics Applications: A DevOps and Domain-Specific Model Approach",
    booktitle = " ECSA 2019 - International Workshop on Software Architecture Challenges in Big Data (SACBD)",
    year = "2019",
    keywords = {distributed computing, distributed systems},
    pdf = {http://wcl.cs.rpi.edu/papers/BDA_2019.pdf},
    abstract = "Big data analytics (BDA) applications use advanced analysis algorithms to extract valuable insights from large, fast, and heterogeneous data sources. These complex BDA applications require software design, development, and deployment strategies to deal with volume, velocity, and variety (3vs) while sustaining expected performance levels. BDA software complexity frequently leads to delayed deployments, longer development cycles and challenging performance monitoring. This paper proposes a DevOps and Domain Specific Model (DSM) approach to design, deploy, and monitor performance Quality Scenarios (QS) in BDA applications. This approach uses high-level abstractions to describe deployment strategies and QS enabling performance monitoring. Our experimentation compares the effort of development, deployment and QS monitoring of BDA applications with two use cases of near mid-air collisions (NMAC) detection. The use cases include different performance QS, processing models, and deployment strategies. Our results show shorter (re)deployment cycles and the fulfillment of latency and deadline QS for micro-batch and batch processing." 
    }
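
    The paper above monitors performance quality scenarios (QS) such as latency and deadline targets for deployed BDA pipelines. As a rough illustration of what checking one latency QS against observations involves, the sketch below uses a hypothetical QS schema and made-up numbers; it is not the paper's domain-specific model.

    # Hypothetical latency quality-scenario check (illustration only).
    from statistics import quantiles

    latency_qs = {
        "name": "nmac-detection-latency",   # assumed QS name
        "percentile": 95,                   # 95th percentile must meet the target
        "threshold_ms": 2000,               # assumed micro-batch latency target
    }

    def qs_satisfied(qs, samples_ms):
        """True if the configured percentile of observed latencies meets the threshold."""
        if len(samples_ms) < 2:
            return False
        p = quantiles(samples_ms, n=100)[qs["percentile"] - 1]  # 1st..99th percentiles
        return p <= qs["threshold_ms"]

    if __name__ == "__main__":
        observed_ms = [850, 920, 1100, 1430, 1980, 2250]  # made-up measurements
        print(qs_satisfied(latency_qs, observed_ms))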
    


  5. Shigeru Imai, Frederick Hole, and Carlos A. Varela. Self-Healing Data Streams Using Multiple Models of Analytical Redundancy. In The 38th AIAA/IEEE Digital Avionics Systems Conference (DASC 2019), San Diego, CA, September 2019. Keyword(s): pilots, data streaming, dddas, cyber physical systems.
    Abstract:
    We have created a highly declarative programming language called PILOTS that enables error detection and estimation of correct data streams based on analytical redundancy (i.e., algebraic relationships among data streams). Data scientists are able to express their analytical redundancy models with the domain-specific grammar of PILOTS and test their models with erroneous data streams. PILOTS has the ability to express a single analytical redundancy, and it has been successfully applied to data from aircraft accidents such as Air France flight 447 and Tuninter flight 1153 where only one simultaneous sensor type failure was observed. In this work, we extend PILOTS to support multiple models of analytical redundancy and improve situational awareness for multiple simultaneous sensor type failures. Motivated by the two recent accidents involving the Boeing 737 Max 8, which were potentially caused by a faulty angle of attack sensor, we focus on recovering angle of attack data streams under multiple sensor type failure scenarios. The simulation results show that multiple models of analytical redundancy enable us to detect failure modes that are not detectable with a single model.

    @InProceedings{imai-dasc-2019,
    author = {Shigeru Imai and Frederick Hole and Carlos A. Varela },
    title = {Self-Healing Data Streams Using Multiple Models of Analytical Redundancy},
    booktitle = {The 38th AIAA/IEEE Digital Avionics Systems Conference (DASC 2019)},
    year = {2019},
    address = {San Diego, CA},
    month = {September},
    url = {http://wcl.cs.rpi.edu/papers/DASC2019_imai.pdf},
    keywords = {pilots, data streaming, dddas, cyber physical systems},
    abstract = {We have created a highly declarative programming language called PILOTS that enables error detection and estimation of correct data streams based on analytical redundancy (i.e., algebraic relationships among data streams). Data scientists are able to express their analytical redundancy models with the domain-specific grammar of PILOTS and test their models with erroneous data streams. PILOTS has the ability to express a single analytical redundancy, and it has been successfully applied to data from aircraft accidents such as Air France flight 447 and Tuninter flight 1153 where only one simultaneous sensor type failure was observed. In this work, we extend PILOTS to support multiple models of analytical redundancy and improve situational awareness for multiple simultaneous sensor type failures. Motivated by the two recent accidents involving the Boeing 737 Max 8, which were potentially caused by a faulty angle of attack sensor, we focus on recovering angle of attack data streams under multiple sensor type failure scenarios. The simulation results show that multiple models of analytical redundancy enable us to detect failure modes that are not detectable with a single model.} 
    }
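
    The paper above detects and corrects faulty sensor streams using analytical redundancy: algebraic relationships among independent data streams give redundant estimates of the same quantity. As a rough illustration in plain Python (not PILOTS syntax), the sketch below uses one assumed simplified relation, that in steady flight the angle of attack is approximately the pitch angle minus the flight-path angle; the tolerance and the example numbers are hypothetical.

    # Illustrative analytical-redundancy check for an angle-of-attack stream.
    import math

    def aoa_from_attitude(pitch_deg, vertical_speed_mps, true_airspeed_mps):
        """Redundant AoA estimate from independent attitude and air-data sensors
        (assumes steady flight and negligible vertical wind)."""
        flight_path_deg = math.degrees(math.asin(vertical_speed_mps / true_airspeed_mps))
        return pitch_deg - flight_path_deg

    def check_aoa(measured_aoa_deg, pitch_deg, vs_mps, tas_mps, tolerance_deg=2.0):
        """Flag the AoA stream as erroneous when it disagrees with the redundant
        estimate by more than the tolerance, and substitute the estimate."""
        estimate = aoa_from_attitude(pitch_deg, vs_mps, tas_mps)
        if abs(measured_aoa_deg - estimate) > tolerance_deg:
            return {"error": True, "recovered_aoa_deg": estimate}
        return {"error": False, "recovered_aoa_deg": measured_aoa_deg}

    if __name__ == "__main__":
        # A stuck vane reads 25 deg while attitude data implies roughly 3.6 deg.
        print(check_aoa(measured_aoa_deg=25.0, pitch_deg=5.0, vs_mps=3.0, tas_mps=120.0))

    Supporting multiple models, as the paper does, amounts to maintaining several such redundant estimators and selecting among them when a given combination of sensor streams fails.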
    


  6. Heidi Jo Newberg, Siddhartha Shelton, Eric Mendelsohn, Jake Weiss, Matthew Arsenault, Jacob S. Bauer, Travis Desell, Roland Judd, Malik Magdon-Ismail, Lee A. Newberg, Matthew Newby, Clayton Rayment, Colin Rice, Boleslaw K. Szymanski, Jeffery M. Thompson, Steve Ulin, Carlos Varela, Lawrence M. Widrow, and Benjamin A. Willett. Streams and the Milky Way dark matter halo. In Proceedings of the International Astronomical Union, volume 14, number S353, pages 75-82, June 2019. Note: Publisher: Cambridge University Press. ISSN: 1743-9213, 1743-9221. Keyword(s): cosmology: dark matter, Galaxy: halo, methods: n-body simulations.
    Abstract:
    We describe an algorithm that can fit the properties of the dwarf galaxy progenitor of a tidal stream, given the properties of that stream. We show that under ideal conditions (the Milky Way potential, the orbit of the dwarf galaxy progenitor, and the functional form of the dwarf galaxy progenitor are known exactly), the density and angular width of stars along the stream can be used to constrain the mass and radial profile of both the stellar and dark matter components of the progenitor dwarf galaxy that was ripped apart to create the stream. Our provisional fit for the parameters of the dwarf galaxy progenitor of the Orphan Stream indicates that it is less massive and has fewer stars than previous works have indicated.

    @InProceedings{newberg-streams-2019,
    title = {Streams and the {Milky} {Way} dark matter halo},
    volume = {14},
    issn = {1743-9213, 1743-9221},
    url = {http://wcl.cs.rpi.edu/papers/IAU-2019.pdf},
    doi = {10.1017/S174392131900855X},
    abstract = {We describe an algorithm that can fit the properties of the dwarf galaxy progenitor of a tidal stream, given the properties of that stream. We show that under ideal conditions (the Milky Way potential, the orbit of the dwarf galaxy progenitor, and the functional form of the dwarf galaxy progenitor are known exactly), the density and angular width of stars along the stream can be used to constrain the mass and radial profile of both the stellar and dark matter components of the progenitor dwarf galaxy that was ripped apart to create the stream. Our provisional fit for the parameters of the dwarf galaxy progenitor of the Orphan Stream indicates that it is less massive and has fewer stars than previous works have indicated.},
    number = {S353},
    journal = {Proceedings of the International Astronomical Union},
    author = {Newberg, Heidi Jo and Shelton, Siddhartha and Mendelsohn, Eric and Weiss, Jake and Arsenault, Matthew and Bauer, Jacob S. and Desell, Travis and Judd, Roland and Magdon-Ismail, Malik and Newberg, Lee A. and Newby, Matthew and Rayment, Clayton and Rice, Colin and Szymanski, Boleslaw K. and Thompson, Jeffery M. and Ulin, Steve and Varela, Carlos and Widrow, Lawrence M. and Willett, Benjamin A.},
    month = jun,
    year = {2019},
    note = {Publisher: Cambridge University Press},
    keywords = {cosmology: dark matter, Galaxy: halo, methods: n-body simulations},
    pages = {75--82} 
    }
    


  7. Saswata Paul, Stacy Patterson, and Carlos A. Varela. Conflict-Aware Flight Planning for Avoiding Near Mid-Air Collisions. In The 38th AIAA/IEEE Digital Avionics Systems Conference (DASC 2019), San Diego, CA, September 2019. Note: Nominated for best student paper award. Keyword(s): cyber physical systems, athena, air traffic management, formal verification.
    Abstract:
    We present a novel conflict-aware flight planning approach that avoids the possibility of near mid-air collisions (NMACs) in the flight planning stage. Our algorithm computes a valid flight-plan for an aircraft (ownship) based on a starting time, a set of discrete way-points in 3D space, discrete values of ground speed, and a set of available flight-plans for traffic aircraft. A valid solution is one that avoids loss of standard separation with available traffic flight-plans. Solutions are restricted to permutations of constant ground speed and constant vertical speed for the ownship between consecutive waypoints. Since the course between two consecutive way-points is not changed, this strategy can be used in situations where vertical or lateral constraints due to terrain or weather may restrict deviations from the original flight-plan. This makes our approach particularly suitable for unmanned aerial systems (UAS) integration into urban air traffic management airspace. Our approach has been formally verified using the Athena proof assistant. Our work, therefore, complements the state-of-the-art pairwise tactical conflict resolution approaches by enabling an ownship to generate strategic flight-plans that ensure standard separation with multiple traffic aircraft, while conforming to possible restrictions on deviation from its flight path.

    @InProceedings{paul-dasc-2019,
    author = {Saswata Paul and Stacy Patterson and Carlos A. Varela },
    title = {Conflict-Aware Flight Planning for Avoiding Near Mid-Air Collisions},
    booktitle = {The 38th AIAA/IEEE Digital Avionics Systems Conference (DASC 2019)},
    year = {2019},
    address = {San Diego, CA},
    month = {September},
    url = {http://wcl.cs.rpi.edu/papers/DASC2019_paul.pdf},
    keywords = {cyber physical systems, athena, air traffic management, formal verification},
    note = {Nominated for best student paper award},
    abstract = {We present a novel conflict-aware flight planning approach that avoids the possibility of near mid-air collisions (NMACs) in the flight planning stage. Our algorithm computes a valid flight-plan for an aircraft (ownship) based on a starting time, a set of discrete way-points in 3D space, discrete values of ground speed, and a set of available flight-plans for traffic aircraft. A valid solution is one that avoids loss of standard separation with available traffic flight-plans. Solutions are restricted to permutations of constant ground speed and constant vertical speed for the ownship between consecutive waypoints. Since the course between two consecutive way-points is not changed, this strategy can be used in situations where vertical or lateral constraints due to terrain or weather may restrict deviations from the original flight-plan. This makes our approach particularly suitable for unmanned aerial systems (UAS) integration into urban air traffic management airspace. Our approach has been formally verified using the Athena proof assistant. Our work, therefore, complements the state-of-the-art pairwise tactical conflict resolution approaches by enabling an ownship to generate strategic flight-plans that ensure standard separation with multiple traffic aircraft, while conforming to possible restrictions on deviation from its flight path.} 
    }
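
    The paper above plans ownship trajectories that keep standard separation from all traffic flight plans, with the separation argument formally verified in Athena. For intuition only, the sketch below shows a plain (unverified) Python check of the separation predicate between two 4D flight plans; the separation minima, units, and sample plans are assumed placeholders, and the paper's actual algorithm and proofs are not reproduced here.

    # Illustrative 4D separation check between two flight plans (not the
    # verified Athena development from the paper).
    import math

    HORIZ_SEP_M = 9260.0   # assumed horizontal minimum (about 5 NM)
    VERT_SEP_M = 305.0     # assumed vertical minimum (about 1000 ft)

    def position_at(plan, t):
        """plan: list of (time_s, x_m, y_m, alt_m) waypoints sorted by time,
        flown as straight segments at constant speed; clamp outside the plan."""
        if t <= plan[0][0]:
            return plan[0][1:]
        for (t0, *p0), (t1, *p1) in zip(plan, plan[1:]):
            if t0 <= t <= t1:
                f = (t - t0) / (t1 - t0)
                return tuple(a + f * (b - a) for a, b in zip(p0, p1))
        return plan[-1][1:]

    def loses_separation(own, traffic, t_start, t_end, step_s=5.0):
        """True if, at any sampled instant, both minima are violated at once."""
        t = t_start
        while t <= t_end:
            (x1, y1, z1), (x2, y2, z2) = position_at(own, t), position_at(traffic, t)
            if math.hypot(x1 - x2, y1 - y2) < HORIZ_SEP_M and abs(z1 - z2) < VERT_SEP_M:
                return True
            t += step_s
        return False

    if __name__ == "__main__":
        own = [(0, 0.0, 0.0, 3000.0), (600, 60000.0, 0.0, 3000.0)]
        traffic = [(0, 60000.0, 500.0, 3000.0), (600, 0.0, 500.0, 3000.0)]
        print(loses_separation(own, traffic, 0, 600))  # crossing traffic -> True

    A conflict-aware planner in the spirit of the paper would search over the ownship's allowed ground speeds and vertical speeds between fixed waypoints until this predicate is False against every traffic flight plan.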
    


Miscellaneous
  1. Carlos A. Varela. Too many airplane systems rely on too few sensors, April 2019. Note: Article published in TheConversation.com. Keyword(s): cyber physical systems, dddas.
    @Misc{varela-conversation-2019,
    author = {Carlos A. Varela},
    title = {Too many airplane systems rely on too few sensors},
    howpublished = {TheConversation.com},
    year = {2019},
    month = {April},
    url = {http://theconversation.com/too-many-airplane-systems-rely-on-too-few-sensors-114394},
    note = {Article published in TheConversation.com},
    keywords = {cyber physical systems, dddas} 
    }
    







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

The documents contained in these directories are made available by the contributing authors to ensure the timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are retained by the authors and by the copyright holders, notwithstanding that they present their work here in electronic form. Persons copying this information must adhere to the terms and constraints covered by each author's copyright. These works may not be made available elsewhere without the explicit permission of the copyright holder.




Last modified: Wed Apr 3 16:12:48 2024
Author: led2.


This document was translated from BibTeX by bibtex2html