Module: Software Systems and Middleware (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The module aims to provide students with an advanced understanding of software system verification, validation, and testing in the context of autonomous, cyber-physical, and AI-driven systems. It explores traditional physics-based execution (PBE) and decision-based execution (DBE) paradigms, linking them to governance and safety-critical frameworks such as ISO 26262, AS9100, and IEEE 2846. Students will learn to apply structured validation processes, evaluate AI component verification methods, and assess the transition from deterministic to data-driven software systems. The module also examines regulatory and ethical implications of AI in safety-critical environments and the challenges of achieving trust and transparency in AI-based software architectures.
Prerequisites A strong foundation in software engineering, control systems, and embedded architectures. Familiarity with programming languages such as Python, C++, or MATLAB, and knowledge of system design and testing methodologies. Prior exposure to safety-critical standards (ISO 26262, DO-178C, or IEC 61508), AI/ML algorithms, and version control practices is recommended.
Learning outcomes Knowledge
• Explain the principles of verification and validation (V&V) in both physics-based and decision-based execution systems.
• Describe software testing frameworks, including component, integration, and system-level approaches.
• Understand regulatory standards (ISO 26262, AS9100, CMMI) and their role in defining safety and assurance levels.
• Analyze challenges in AI component validation, including training set verification, robustness testing, and anti-specification frameworks.
Skills
• Develop and execute structured test plans and coverage analyses for complex, data-driven systems.
• Use simulation tools to generate and evaluate test scenarios for AI-based and safety-critical applications.
• Apply V&V techniques to assess software reliability and traceability across development lifecycles.
• Critically evaluate AI model performance using robustness, fairness, and explainability metrics.
Understanding
• Appreciate the philosophical and practical differences between deterministic (PBE) and non-deterministic (DBE) testing paradigms.
• Recognize the ethical and governance implications of AI deployment in safety-critical systems.
• Demonstrate interdisciplinary reasoning across engineering, regulatory, and societal domains when designing and testing autonomous software systems.
Topics 1. Verification and Validation Fundamentals:
– Overview of PBE vs DBE paradigms, fault analysis, and safety argument structures.
– Introduction to structured testing: unit, integration, and system-level testing.
2. Safety-Critical Standards and Governance:
– ISO 26262 (Automotive), AS9100 (Aerospace), and CMMI frameworks.
– Automotive Safety Integrity Levels (ASIL) and Design Assurance Levels (DALs).
3. Software Testing and Coverage:
– Code coverage, pseudo-random test generation, and scenario-based validation.
– Role of simulation, fault injection, and test automation.
4. AI Component Validation:
– Differences between AI and conventional software validation: coverage, code review, and data governance.
– Training set validation, robustness to noise, and explainable AI.
5. Specification and Anti-Specification Challenges:
– IEEE 2846 and AI driver concepts; ethical, legal, and liability considerations.
– Human-equivalent testing and performance evaluation frameworks.
6. Emerging V&V Trends:
– Continuous integration (CI/CD), software-in-the-loop (SIL) and hardware-in-the-loop (HIL) simulation, and AI-assisted verification.
– Case studies: Automotive ADAS, aviation autonomy, and robotics.
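The pseudo-random test generation and coverage ideas in Topic 3 can be illustrated with a minimal sketch in Python (the module's lab language). The unit under test (`clamp`) and the branch labels are hypothetical, chosen only to show the pattern: a fixed seed makes failures replayable (a V&V traceability requirement), and the set of exercised outcome branches is a crude coverage measure.

```python
import random

def clamp(value, low, high):
    """Hypothetical unit under test: clamp a sensor reading to a valid range."""
    if value < low:
        return low
    if value > high:
        return high
    return value

def pseudo_random_tests(seed, n_cases, low=-100.0, high=100.0):
    """Generate reproducible pseudo-random inputs from a fixed seed,
    so any failing case can be replayed exactly."""
    rng = random.Random(seed)
    # Sample beyond the valid range so out-of-range branches are reachable.
    return [rng.uniform(low * 2, high * 2) for _ in range(n_cases)]

def run_suite(seed=42, n_cases=1000, low=-100.0, high=100.0):
    """Execute the suite and record which outcome branches were exercised."""
    covered = set()
    for x in pseudo_random_tests(seed, n_cases, low, high):
        y = clamp(x, low, high)
        assert low <= y <= high  # the safety property under test
        if x < low:
            covered.add("below")
        elif x > high:
            covered.add("above")
        else:
            covered.add("in_range")
    return covered
```

With enough random cases all three branches are hit; in practice the same reporting would come from a coverage tool rather than hand-counted branches.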
Type of assessment A positive grade requires a positive evaluation of the module topics and the presentation of practical-work results with the required documentation.
Learning methods Lecture — Present theoretical underpinnings of software and AI testing, covering safety-critical standards and AI V&V challenges.
Lab works — Practical exercises in automated testing, simulation-driven validation, and robustness evaluation using Python/ROS/MATLAB.
Individual assignments — Develop and analyze test strategies, evaluate compliance with ISO/IEEE frameworks, and submit technical reports.
Self-learning — Review international standards, research literature, and case studies of AI validation in autonomous domains.
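The robustness evaluation named in the lab works can be sketched as follows; the threshold detector and the stability metric are hypothetical stand-ins for a real AI component and a real robustness framework. The score is the fraction of inputs whose decision is unchanged under seeded Gaussian input noise.

```python
import random

def threshold_detector(reading, limit=0.5):
    """Hypothetical AI-style component: flags an obstacle when a
    normalized sensor reading exceeds a limit."""
    return reading > limit

def robustness_score(inputs, noise_sigma, trials=50, seed=0):
    """Fraction of inputs whose decision never flips under Gaussian
    input noise -- a simple robustness-to-noise metric."""
    rng = random.Random(seed)  # seeded for reproducible evaluation
    stable = 0
    for x in inputs:
        baseline = threshold_detector(x)
        flips = sum(
            threshold_detector(x + rng.gauss(0.0, noise_sigma)) != baseline
            for _ in range(trials)
        )
        if flips == 0:
            stable += 1
    return stable / len(inputs)
```

As expected, inputs far from the decision boundary score as robust, while inputs near the boundary do not; real exercises would apply the same idea to learned models and sensor-level perturbations.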
AI involvement AI tools can assist in generating test cases, simulating complex operational scenarios, and analyzing coverage gaps. Students must validate AI-generated results, maintain traceability, and document AI involvement transparently in compliance with academic ethics.
Recommended tools and environments
Verification and Validation focus
Relevant standards and regulatory frameworks
en/safeav/curriculum/softsys-m.1762247645.txt.gz · Last modified: 2025/11/04 09:14 by airi
CC Attribution-Share Alike 4.0 International