Collaborative Research: SLES: Foundations of Qualitative and Quantitative Safety Assessment of Learning-enabled Systems

Project: Research project

Project Details

Description

Learning-enabled autonomous systems operating in unfamiliar or unprecedented environments pose new foundational challenges for their safety assessment and subsequent risk management. In this context, system-level safety means that the complex behaviors created by the interactions between multiple learning components and the physical world satisfy the safety requirements, protecting the system from accidental failures and avoiding hazards such as collisions with other vehicles, bicycles, and pedestrians. The qualitative and quantitative methodologies are envisioned to complement each other by providing both binary 'yes'/'no' decisions and numerical measures of safety, allowing for a thorough understanding of safety concerns and enabling effective safety verification in uncertain environments. This project targets the foundational challenges of developing qualitative and quantitative safety assessment methods capable of capturing uncertainties from environments and providing timely, comprehensive, and accurate safety evaluations at the system level. The outcomes are expected to boost the trustworthiness and adaptability of learning-enabled systems in unknown environments and facilitate their safe integration into domains such as autonomous vehicles, robotics, and industrial automation. Educational and outreach activities are well integrated into the research, including curriculum development, K-12 STEM outreach, and industrial engagement. These activities are uniquely positioned to promote diversity throughout the project by giving priority consideration to, mentoring, and working with students from underrepresented minority groups.

The proposed research efforts will be directed toward building the foundations of end-to-end qualitative and quantitative safety assessment of learning-enabled autonomous systems. This project will develop the probabilistic star temporal logic specification language.
The new specification language offers a formalism for expressively modeling learning-process uncertainty and complex temporal behaviors, and supports both qualitative and quantitative reasoning. Efficient computation methods and tools will be developed to verify probabilistic star temporal logic specifications for learning-enabled deep neural network components, with an emphasis on scalability and resource efficiency. The project will also develop system-level qualitative and quantitative safety assessment methods and tools that can handle the interplay of various learning-enabled components in a system under different levels of available environment information. The learning-enabled F1Tenth testbed, a small-scale platform of real autonomous vehicles and its accompanying simulator, will be used to create multiple real-world autonomous driving scenarios to validate and evaluate the applicability, scalability, and reliability of the proposed methods and tools.

This research is supported by a partnership between the National Science Foundation and Open Philanthropy. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
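To make the verification idea concrete: star sets represent a neural network's reachable states as an affine transformation of constrained free variables, which can be propagated exactly through linear layers and then bounded to answer a qualitative safety query. The sketch below is purely illustrative and not the project's implementation; the simplified star representation (box-constrained variables, i.e. a zonotope), the helper names `affine_map` and `interval_bounds`, and the toy weights are all assumptions made for this example.

```python
import numpy as np

# Simplified "star set" {c + V @ alpha : alpha in [-1, 1]^m}.
# Full star sets allow arbitrary linear predicate constraints on alpha;
# here we use box constraints to keep the sketch short.

def affine_map(c, V, W, b):
    """Exact image of the star set under the layer x -> W @ x + b."""
    return W @ c + b, W @ V

def interval_bounds(c, V):
    """Tight box bounds of the star set when alpha ranges over [-1, 1]^m."""
    r = np.abs(V).sum(axis=1)  # per-dimension radius
    return c - r, c + r

# Toy single-layer check: does every reachable output stay below 1.0?
W1 = np.array([[1.0, -0.5], [0.5, 1.0]])   # hypothetical layer weights
b1 = np.array([0.1, -0.2])                 # hypothetical layer bias
c0, V0 = np.array([0.0, 0.0]), 0.1 * np.eye(2)  # input box of radius 0.1

c1, V1 = affine_map(c0, V0, W1, b1)
lo, hi = interval_bounds(c1, V1)
safe = bool(np.all(hi < 1.0))  # qualitative 'yes'/'no' safety answer
```

A quantitative assessment would additionally attach a probability measure to the input variables and compute how much of the set satisfies the property, rather than returning only the binary verdict.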
Status: Active
Effective start/end date: 12/1/23 to 11/30/26

Funding

  • National Science Foundation: $270,913.00
