====== Module: Software Systems and Middleware (Part 2) ======
^ **Study level** | Master |
^ **ECTS credits** | 1 ECTS |
^ **Study forms** | Hybrid or fully online |
^ **Module aims** | The aim of the module is to introduce software verification, validation, and testing (V&V) methods for autonomous, cyber-physical, and AI-based systems. The course develops students' ability to plan, implement, and assess V&V strategies across physics-based and data-driven software, in line with relevant safety and governance standards. |
^ **Pre-requirements** | Basic knowledge of software engineering, control or embedded systems, and programming skills. Familiarity with system design, testing methodologies, AI/ML concepts, or safety-related standards is recommended but not mandatory. |
^ **Learning outcomes** | **Knowledge**\\ • Explain the principles of V&V in both physics-based execution (PBE) and decision-based execution (DBE) systems.\\ • Describe software testing frameworks, including component-, integration-, and system-level approaches.\\ • Understand regulatory standards and their role in defining safety and assurance levels.\\ • Analyze challenges in AI component validation, including training set verification, robustness testing, and anti-specification frameworks.\\ **Skills**\\ • Develop and execute structured test plans and coverage analyses for complex, data-driven systems.\\ • Use simulation tools to generate and evaluate test scenarios for AI-based and safety-critical applications.\\ • Apply V&V techniques to assess software reliability and traceability across development lifecycles.\\ • Critically evaluate AI model performance using robustness, fairness, and explainability metrics.\\ **Understanding**\\ • Appreciate the philosophical and practical differences between deterministic and non-deterministic testing paradigms.\\ • Recognize the ethical and governance implications of AI deployment in safety-critical systems.\\ • Demonstrate interdisciplinary reasoning across engineering, regulatory, and societal domains when designing and testing autonomous software systems. |
^ **Topics** | 1. Verification and Validation Fundamentals:\\ – Overview of PBE vs DBE paradigms, fault analysis, and safety argument structures.\\ – Introduction to structured testing: unit, integration, and system-level testing.\\ 2. Safety-Critical Standards and Governance:\\ – ISO 26262 (automotive), AS9100 (aerospace), and CMMI frameworks.\\ – Automotive Safety Integrity Levels and Design Assurance Levels.\\ 3. Software Testing and Coverage:\\ – Code coverage, pseudo-random test generation, and scenario-based validation.\\ – Role of simulation, fault injection, and test automation.\\ 4. AI Component Validation:\\ – Differences between AI and conventional software validation; coverage, code review, and data governance.\\ – Training set validation, robustness to noise, and explainable AI.\\ 5. Specification and Anti-Specification Challenges:\\ – IEEE 2846 and AI driver concepts; ethical, legal, and liability considerations.\\ – Human-equivalent testing and performance evaluation frameworks.\\ 6. Emerging V&V Trends:\\ – Continuous integration, simulation-in-the-loop, and AI-assisted verification.\\ – Case studies: automotive ADAS, aviation autonomy, and robotics. |
^ **Type of assessment** | A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation. |
^ **Learning methods** | **Lectures**: Present the theoretical underpinnings of software and AI testing, covering safety-critical standards and AI V&V challenges.\\ **Lab works**: Practical exercises in automated testing, simulation-driven validation, and robustness evaluation using Python/ROS/MATLAB.\\ **Individual assignments**: Develop and analyze test strategies, evaluate compliance with ISO/IEEE frameworks, and submit technical reports.\\ **Self-learning**: Review international standards, research literature, and case studies of AI validation in autonomous domains. |
^ **AI involvement** | AI tools can assist in generating test cases, simulating complex operational scenarios, and analyzing coverage gaps. Students must validate AI-generated results, maintain traceability, and document AI involvement transparently in compliance with academic ethics. |
^ **Recommended tools and environments** | ROS, MATLAB |
^ **Verification and Validation focus** | |
^ **Relevant standards and regulatory frameworks** | ISO 26262, AS9100, CMMI, IEEE 2846 |
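The pseudo-random test generation and coverage topics above can be illustrated with a minimal sketch. This is not part of the course materials: the system under test (`clamp_command`, a hypothetical actuator-command limiter) and the branch-coverage bookkeeping are invented for illustration. A fixed seed makes the pseudo-random test vectors reproducible, which is the property that distinguishes pseudo-random testing from ad-hoc random testing.

```python
import random

def clamp_command(value, lo=-1.0, hi=1.0):
    """Hypothetical actuator-command limiter used as the system under test."""
    if value < lo:
        return lo
    if value > hi:
        return hi
    return value

def pseudo_random_test(n=1000, seed=42):
    """Seeded (reproducible) pseudo-random test run with simple branch coverage."""
    rng = random.Random(seed)           # fixed seed -> same test vectors every run
    coverage = {"below": 0, "above": 0, "inside": 0}
    for _ in range(n):
        x = rng.uniform(-5.0, 5.0)
        y = clamp_command(x)
        assert -1.0 <= y <= 1.0         # safety invariant: output stays in range
        if x < -1.0:                    # record which branch the input exercised
            coverage["below"] += 1
        elif x > 1.0:
            coverage["above"] += 1
        else:
            coverage["inside"] += 1
    return coverage

cov = pseudo_random_test()
print(cov)
```

A coverage report like this lets the tester argue that every branch of the limiter was exercised, rather than merely that no assertion fired.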
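The "robustness to noise" topic can likewise be sketched in a few lines. The perception stand-in below (`detect_obstacle`, a simple distance threshold) and the stability metric are hypothetical, chosen only to show the shape of a robustness evaluation: perturb the input with Gaussian sensor noise and measure how often the component's decision is unchanged.

```python
import random

def detect_obstacle(distance_m, threshold=10.0):
    """Hypothetical perception stand-in: flags obstacles closer than threshold."""
    return distance_m < threshold

def robustness_under_noise(samples=2000, noise_sigma=0.5, seed=7):
    """Fraction of decisions unchanged when the input is perturbed by Gaussian noise."""
    rng = random.Random(seed)
    stable = 0
    for _ in range(samples):
        d = rng.uniform(0.0, 20.0)                # nominal sensor reading
        noisy = d + rng.gauss(0.0, noise_sigma)   # perturbed reading
        if detect_obstacle(d) == detect_obstacle(noisy):
            stable += 1                           # decision survived the noise
    return stable / samples

score = robustness_under_noise()
print(f"decision stability under noise: {score:.3f}")
```

Decisions can only flip near the threshold, so the stability score stays close to 1; sweeping `noise_sigma` would show how quickly robustness degrades, which is the kind of evidence an AI-component validation argument needs.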