SAM 2025

Keynotes & Programme

Please refer to the MODELS 2025 programme for additional information.

Clicking on a presentation title below takes you to a YouTube video for the paper presentation.

Clicking on the accepted paper title on the Accepted Papers page takes you to the IEEE Computer Society Digital Library page for the paper.

Note: all times listed below are EST.

Monday October 6

  • 08:30 Welcome & Keynote
    • Conference Opening
      Erik Fredericks and Eugene Syriani
    • Keynote

      Multidisciplinary Model-Based Approaches to Assurance for Safety-Critical Learning-Enabled Autonomous Systems.

      Trustworthy artificial intelligence (Trusted AI) is essential when autonomous, safety-critical systems use learning-enabled components (LECs) in uncertain environments. When reliant on deep learning, these learning-enabled autonomous systems (LEAS) must address the reliability, interpretability, and robustness (collectively, the assurance) of learning models. Three types of uncertainty most significantly affect assurance. First, uncertainty about the physical environment can cause suboptimal, and sometimes catastrophic, results as the system struggles to adapt to unanticipated or poorly understood environmental conditions. For example, when lane markings are occluded (whether on the camera lens or on the physical lanes), lane management functionality can be critically compromised. Second, uncertainty in the cyber environment can create unexpected and adverse consequences, including not only performance impacts (network load, real-time response, etc.) but also potential threats or overt (cybersecurity) attacks. Third, uncertainty associated with the data used to train and validate AI components has the potential not only to cause LECs to fail unexpectedly, but also to create a false sense of trust among interacting components and stakeholders. While learning-enabled technologies have made great strides in addressing uncertainty, challenges remain in assuring such systems when they encounter uncertainty not addressed in the training data. Furthermore, we need to consider LEAS as first-class software-based systems that should be rigorously developed, verified, and maintained (i.e., software engineered). In addition to developing specific strategies to address these concerns, appropriate software frameworks are needed to coordinate LECs and ensure they deliver acceptable behavior even under uncertain conditions.
We further posit that, due to the increasing complexity of LEAS and the lack of code-based artifacts, it becomes imperative to take a model-based approach to LEAS assurance. To this end, this presentation overviews a number of our multidisciplinary research projects involving industrial collaborators, which collectively support a software engineering, model-based approach to Trusted AI and to providing assurance for learning-enabled autonomous systems. In addition to sharing lessons learned from more than two decades of research on assurance for autonomous systems, the presentation will overview near-term and longer-term research challenges for learning-enabled, safety-critical autonomous systems.

      Betty H.C. Cheng is a Professor in the Department of Computer Science and Engineering at Michigan State University. Her research focuses on trusted AI, automated software engineering, self-adaptive systems, requirements engineering, model-driven engineering, and automotive cyber security, with applications to intelligent transportation and vehicle systems. She collaborates extensively with industry to facilitate technology transfer. Her work has been funded by NSF, ONR, DARPA, NASA, AFRL, ARO, and numerous industrial partners. She is an Associate Editor-in-Chief for IEEE Transactions on Software Engineering and serves on the editorial boards of Requirements Engineering Journal and Software and Systems Modeling. She was Technical Program Co-Chair of ICSE 2013, the flagship conference in software engineering. She received her BS from Northwestern University and her MS and PhD from the University of Illinois Urbana-Champaign, all in computer science. More details: https://www.cse.msu.edu/~chengb.

  • 09:40 Vision for the Future of Engineering Systems
    • Modeling: The Heart and Soul of Engineering Smart Ecosystems.
      Antonio Bucchiarone, Benoit Combemale, Alfonso Pierantonio, Nelly Bencomo, Mark van den Brand, Jean-Michel Bruel, Antonio Cicchetti, Juri Di Rocco, Leen Lambers, Judith Michael, Bernhard Rumpe, Mikael Sjodin, Gabriele Taentzer, Matthias Tichy, Hans Vangheluwe, Manuel Wimmer and Steffen Zschaler
  • 10:00 Coffee Break
  • 10:30 Session: Traceability and Verification
  • 12:00 Lunch
  • 13:30 Session: Systems Engineering
  • 15:00 Coffee Break
  • 15:30 Session: Formalization

Tuesday October 7