How to Run Worst-Case Execution Time (WCET) Checks in Your CI for Real-time Systems

boards
2026-02-04
11 min read

Add WCET checks to embedded CI with RocqStat and VectorCAST—practical CI patterns, examples, gating rules, and 2026 trends.

When timing failures slip past the CI pipeline, safety and schedules break

Real-time embedded teams know the drill: unit tests pass, integration tests pass, and yet a field unit misses a deadline. The result is expensive rework, certification headaches, and risk to lives and reputation. If your CI pipeline doesn't include Worst-Case Execution Time (WCET) checks, you are testing functionality but not timing — and in safety-critical systems timing is a first-class requirement.

What this guide delivers

This article gives a pragmatic, example-driven playbook for adding WCET analysis (static and hybrid) to embedded CI pipelines using tools like RocqStat and VectorCAST. You will get:

  • An overview of 2026 trends shaping timing verification (including Vector's acquisition of RocqStat)
  • Concrete CI patterns and YAML/Jenkins examples for running WCET checks
  • Operational rules for gating, triage, and tool qualification evidence for safety standards
  • Advanced strategies for scaling WCET in large codebases and multi-platform targets

Why WCET in CI matters in 2026

By 2026, timing safety is no longer niche. Automotive and aerospace projects are moving to software-defined architectures and demand integrated verification. In January 2026 Vector Informatik acquired StatInf’s RocqStat to bring timing analysis into the VectorCAST toolchain — a clear market signal: vendors and integrators expect unified toolchains for WCET, testing and verification.

"Vector will integrate RocqStat into its VectorCAST toolchain to unify timing analysis and software verification." — Automotive World, Jan 16, 2026

Three 2026 trends you must account for:

  • Unified verification toolchains: Vendors now push integration between static timing analysis and test suites, so CI is the place to coordinate both.
  • Cloud and hybrid CI for embedded: Teams use cloud runners for builds and local/HIL runners for hardware execution; orchestration matters.
  • Tool qualification and evidence automation: Standards (ISO 26262, DO-178C) require traceability and tool evidence — CI is the primary place to generate and archive that evidence.

WCET approaches: static, measurement, and hybrid

Choose the analysis technique that fits your risk model; many teams run multiple approaches in CI.

Static WCET analysis

Tools like RocqStat compute safe upper bounds without running code on real hardware. They model microarchitectural features (caches, pipelines) and are great for early checks and for proving bounds required by certification.

Measurement-based timing

Execution measurements on real hardware or HIL give empirical evidence. These are essential for validating assumptions in static models and for regression tracking.

Hybrid approaches

Hybrid workflows run static analysis over instrumented traces, or use static analysis to prune infeasible paths while measurements calibrate microarchitectural parameters. The VectorCAST + RocqStat integration roadmap reflects this hybrid trend.

High-level CI architecture for WCET checks

Integrate WCET into CI using a pipeline pattern that separates deterministic host steps from hardware steps:

  1. Build and produce reproducible artifacts: pin toolchains, produce ELF/hex and map files.
  2. Static WCET analysis stage: run RocqStat or equivalent on the compiled artifacts.
  3. Instrumentation and test-run stage: run unit and integration tests on simulator/HIL collecting traces if using hybrid methods.
  4. Measurement-based stage (HIL): execute worst-case scenarios on hardware to validate models.
  5. Reporting and gating: publish artifacts, compare to baselines, optionally block merges on violations.

Determinism is the foundation — reproducible environments

WCET analysis needs the same object layout and compiler behavior on every run to be repeatable. Implement these controls in CI:

  • Pin compilers, linkers and flags in CI images (use immutable Docker images for toolchain hosts).
  • Record and archive the exact compiler and linker versions, map files, and ELF symbols as CI artifacts (see the manifest sketch after this list).
  • Disable non-deterministic optimizations for timing-critical modules unless accounted for in analysis.
  • Use reproducible build techniques (deterministic timestamps, fixed build paths).
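
To make the record-and-archive step concrete, here is a minimal sketch of a manifest script that captures toolchain versions and artifact hashes as a CI artifact. The arm-none-eabi toolchain names and the build/artifacts layout are assumptions borrowed from the CI samples below; substitute your own.

#!/usr/bin/env python3
"""Sketch: record toolchain versions and artifact hashes for WCET traceability.

The arm-none-eabi toolchain and build/artifacts layout are assumptions;
adapt both to your project.
"""
import hashlib
import json
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def tool_version(tool: str) -> str:
    # First line of `<tool> --version`, e.g. the gcc banner.
    out = subprocess.run([tool, "--version"], capture_output=True, text=True, check=True)
    return out.stdout.splitlines()[0]

manifest = {
    "compiler": tool_version("arm-none-eabi-gcc"),  # assumed cross-compiler
    "linker": tool_version("arm-none-eabi-ld"),
    "artifacts": {
        str(p): sha256(p)
        for p in Path("build/artifacts").glob("*") if p.is_file()
    },
}
Path("build/build_manifest.json").write_text(json.dumps(manifest, indent=2))
print("Wrote build/build_manifest.json")

Archive build_manifest.json alongside the firmware so any later analysis run can prove which toolchain produced the binary.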

Licensing and secure execution

Tools like RocqStat and VectorCAST require license management. Plan for CI:

  • Use self-hosted runners for stages that need node-locked or floating licenses — these have controlled network access to license servers.
  • Protect license credentials with secrets managers (GitHub Secrets, Vault) and avoid baking licenses into images.
  • For cloud CI that can't access license servers, use short-lived license tokens or offline token files that are rotated and encrypted (a token-handling sketch follows this list).
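
A minimal sketch of the token-handling step, assuming the secret arrives as an environment variable (for example from GitHub Secrets or Vault) and that the destination path matches the workflow example below; adjust both for your runner.

#!/usr/bin/env python3
"""Sketch: materialize a short-lived WCET-tool license token from a CI secret.

ROCQSTAT_LICENSE_TOKEN and the destination path are assumptions; wire them
to your secret store and runner layout.
"""
import os
import stat
from pathlib import Path

token = os.environ["ROCQSTAT_LICENSE_TOKEN"]  # injected by the CI secret store
dest = Path("/etc/rocqstat/license.token")    # assumed path; needs write access
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_text(token)
dest.chmod(stat.S_IRUSR | stat.S_IWUSR)  # 0600: readable only by the runner user
print(f"License token written to {dest}")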

Sample: GitHub Actions pattern for WCET static checks

Use a containerized build and a self-hosted runner for static WCET analysis with access to a license server or token. Below is a minimal pattern; adapt the binary paths and tool invocation to your project:

name: WCET CI

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: |
          ./build.sh --target=stm32f7 --out=build/artifacts
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: firmware-artifacts
          path: build/artifacts

  static-wcet:
    runs-on: self-hosted # runner with RocqStat installed & licensed
    needs: build
    steps:
      - uses: actions/checkout@v4
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: firmware-artifacts
          path: build/artifacts # restore the same layout the build produced
      - name: Run RocqStat
        run: |
          export ROCQSTAT_LICENSE=/etc/rocqstat/license.token
          mkdir -p wcet
          rocqstat --input build/artifacts/firmware.elf --map build/artifacts/firmware.map --output wcet/report.json
      - name: Publish WCET report
        uses: actions/upload-artifact@v4
        with:
          name: wcet-report
          path: wcet/report.json

Key points: run static analysis on the same artifacts produced by CI's build stage and store the resulting reports for traceability.

Sample: Jenkins pipeline with HIL measurement stage

When you need real hardware runs, combine Docker-based deterministic build stages with a HIL stage on machines connected to your DUT rack.

pipeline {
  agent none
  stages {
    stage('Build') {
      agent { label 'docker-builder' }
      steps {
        sh './build.sh --target=cortex-m4 --out=build/artifacts'
        stash includes: 'build/artifacts/**', name: 'firmware' // share with later stages
        archiveArtifacts artifacts: 'build/artifacts/**'
      }
    }

    stage('Static WCET') {
      agent { label 'wcet-node' } // has RocqStat + license
      steps {
        unstash 'firmware' // artifacts from the Build stage
        sh 'mkdir -p wcet && rocqstat --input build/artifacts/firmware.elf --map build/artifacts/firmware.map --output wcet/report.json'
        archiveArtifacts artifacts: 'wcet/report.json'
      }
    }

    stage('HIL Measurements') {
      agent { label 'hil-rack' }
      steps {
        unstash 'firmware'
        sh './run_hil_tests.sh --firmware build/artifacts/firmware.hex --collect-traces traces/'
        sh 'mkdir -p wcet && wcet-measure --traces traces/ --output wcet/hil_report.json'
        archiveArtifacts artifacts: 'wcet/hil_report.json'
      }
    }
  }
}

HIL stages should run on hardware-attached nodes or orchestrated via a device farm API. Keep them isolated to avoid interference.

Configuring RocqStat/VectorCAST in CI

Integration specifics differ by vendor, but the practical checklist is the same:

  • Provide precise binary inputs: ELF, map, symbol files and compiler flags used for linking.
  • Pass target microarchitecture parameters (cache sizes, pipeline model) used by the static analyzer.
  • Record the analysis configuration alongside results for repeatability (a configuration-snapshot sketch follows this list).
  • Export standard machine-readable reports (JSON/XML) so CI can parse results and enforce gates. Make these part of your PR workflows so developers get early feedback.
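
To make the configuration-snapshot item concrete, here is a hedged sketch that archives the parameters a run used next to its report. Every field name and value is illustrative; mirror whatever your analyzer actually consumes.

#!/usr/bin/env python3
"""Sketch: archive the exact analysis configuration next to each WCET report.

All target parameters and file names are placeholders."""
import json
import time
from pathlib import Path

config = {
    "target": "cortex-m4",            # illustrative target
    "icache_kb": 16,                   # microarchitecture parameters
    "dcache_kb": 16,                   # the analyzer was run with
    "pipeline_model": "in-order",      # (all illustrative values)
    "compiler_flags": "-O2 -mcpu=cortex-m4",
    "analyzed_elf": "build/artifacts/firmware.elf",
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
}
out = Path("wcet")
out.mkdir(exist_ok=True)
(out / "analysis_config.json").write_text(json.dumps(config, indent=2))
print("Archived analysis configuration to wcet/analysis_config.json")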

Gating strategy: when CI should fail

Failing on every timing fluctuation causes noise. Use these pragmatic gating rules:

  1. Baseline threshold: Record a conservative baseline per function/module. Fail if new WCET > baseline + allowed delta (see the gate sketch after this list).
  2. New code strictness: For newly added functions or changed control paths, fail on any increase beyond analytical margins.
  3. Regression-only mode: For noisy hardware measurements, fail only on statistically significant regressions using N-day rolling windows.
  4. Soft vs hard gates: Use warnings for small deviations; block merges for violations of certified budgets.
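
The gate sketch referenced in rule 1, covering rules 1 and 2. It assumes a simplified report schema of {"functions": {name: wcet_cycles}}; map your tool's real JSON output onto it before adopting the script.

#!/usr/bin/env python3
"""Sketch: baseline-plus-delta WCET gate (rules 1 and 2 above).

The {"functions": {name: cycles}} schema is an assumption, not any vendor's
actual report format."""
import json
import sys
from pathlib import Path

ALLOWED_DELTA = 0.05  # tolerate 5% growth before the gate trips

baseline = json.loads(Path("wcet/baseline.json").read_text())["functions"]
current = json.loads(Path("wcet/report.json").read_text())["functions"]

violations = []
for func, wcet in current.items():
    base = baseline.get(func)
    if base is None:
        # New-code strictness (rule 2): unknown functions need explicit review.
        violations.append(f"{func}: new function, no baseline ({wcet} cycles)")
    elif wcet > base * (1 + ALLOWED_DELTA):
        violations.append(f"{func}: {wcet} cycles exceeds baseline {base} + {ALLOWED_DELTA:.0%}")

if violations:
    print("WCET gate FAILED:")
    print("\n".join(violations))
    sys.exit(1)  # non-zero exit blocks the merge
print("WCET gate passed.")

Run it as the last step of the static-wcet job so a violation fails the status check that branch protection watches.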

Handling false positives and flakiness

Common causes of WCET noise include non-deterministic hardware, power management, and compiler inlining changes. Mitigations:

  • Pin and record compiler flags; treat optimization changes as a separate change set.
  • Disable DVFS and sleep modes on DUTs during measurement runs.
  • Use multiple runs and statistical aggregation for measurement-based checks (see the aggregation sketch after this list).
  • Isolate caches (where possible) or use cache-partitioning features to reduce variance.
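
The aggregation sketch referenced above folds repeated HIL runs into one observed maximum per function and flags high-variance functions for re-measurement. The run_<n>.json naming and schema are assumptions.

#!/usr/bin/env python3
"""Sketch: aggregate repeated HIL measurement runs per function.

Assumes each run wrote wcet/run_<n>.json with {"functions": {name: cycles}}
(an assumed layout) and that at least one run file exists."""
import json
import statistics
from pathlib import Path

runs = [json.loads(p.read_text())["functions"]
        for p in sorted(Path("wcet").glob("run_*.json"))]

for func in runs[0]:
    samples = [run[func] for run in runs if func in run]
    worst = max(samples)
    # Coefficient of variation as a cheap noise indicator.
    cv = statistics.pstdev(samples) / statistics.mean(samples)
    flag = "  <-- noisy, re-measure" if cv > 0.02 else ""
    print(f"{func}: max={worst} cycles, cv={cv:.3f}{flag}")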

Reporting and developer experience

WCET reports must be actionable for developers. Make them part of PR workflows:

  • Publish machine-readable results and a summarized HTML report. Highlight functions exceeding thresholds.
  • Create automated annotations on PRs with key hotspots and suggested remediation, e.g., avoid recursion, tighten loop bounds, or reconsider inlining (see the annotation sketch after this list).
  • Link reports to trace artifacts (map files, trace logs) so triage is straightforward.
  • Provide a 'why' section for increases: compiler version change, changed loop bounds, new OS interactions.
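
One low-effort way to produce the PR annotations above on GitHub is to emit workflow commands from the report; GitHub renders each ::warning line as an inline annotation. The report schema and the per-function budgets map are assumptions carried over from the gating sketch.

#!/usr/bin/env python3
"""Sketch: surface WCET hotspots as GitHub Actions warning annotations.

The {"functions": ...} schema and the "budgets" map are assumed, not a
vendor format; ::warning is standard GitHub Actions workflow-command syntax."""
import json
from pathlib import Path

report = json.loads(Path("wcet/report.json").read_text())
budgets = report.get("budgets", {})  # hypothetical per-function budget map

for func, wcet in report["functions"].items():
    budget = budgets.get(func)
    if budget is not None and wcet > budget:
        print(f"::warning title=WCET hotspot::{func} is {wcet} cycles "
              f"(budget {budget}); check for new loops, recursion, or inlining changes")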

Tool qualification and compliance evidence

For ISO 26262 / DO-178C, you must show tool qualification artifacts. CI is your evidence generator:

  • Archive the exact tool binaries and versions used for analysis as signed artifacts.
  • Store configuration files, license receipts, and analysis outputs in an immutable artifact store.
  • Automate generation of a tool qualification report and link each analysis run to requirements IDs (an evidence-manifest sketch follows this list).
  • Keep a traceability matrix between requirements, tests, and WCET results in CI artifacts.
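
A sketch of the evidence-manifest step: it fingerprints the tool binary, configuration, and report for one run and ties them to requirement IDs. The install path and REQ-* identifiers are placeholders for your requirements-management integration.

#!/usr/bin/env python3
"""Sketch: bundle tool-qualification evidence for one analysis run.

The tool path and REQ-* identifiers are placeholders."""
import hashlib
import json
from pathlib import Path

def sha256(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

evidence = {
    "tool_binary_sha256": sha256("/opt/rocqstat/bin/rocqstat"),  # assumed install path
    "analysis_config_sha256": sha256("wcet/analysis_config.json"),
    "report_sha256": sha256("wcet/report.json"),
    "requirement_ids": ["REQ-TIM-042", "REQ-TIM-043"],  # placeholder IDs
}
Path("wcet/evidence_manifest.json").write_text(json.dumps(evidence, indent=2))
print("Wrote wcet/evidence_manifest.json; archive it immutably with the run.")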

Scaling WCET across large codebases

Large projects need to avoid analyzing the entire image every commit. Use these scaling strategies:

  • Incremental analysis: Run full WCET nightly and incremental per-PR analyses focused on changed modules.
  • Function-level budgets: Maintain per-function/endpoint budgets and run focused checks for impacted functions.
  • Parallelize and distribute: Split static analysis by translation unit or control-flow region and run in parallel on CI runners.
  • Cache analysis results: If binary inputs and compilation flags are identical, reuse cached results (see the cache-key sketch after this list).
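
The cache-key sketch referenced in the last item: hash every input that can change the analysis result, and reuse the cached report when the key matches. The flag string and config tag are illustrative.

#!/usr/bin/env python3
"""Sketch: derive a CI cache key for reusing WCET analysis results.

The flag string and config tag are assumptions; include every input that
can change the analysis outcome."""
import hashlib
from pathlib import Path

hasher = hashlib.sha256()
hasher.update(Path("build/artifacts/firmware.elf").read_bytes())
hasher.update(Path("build/artifacts/firmware.map").read_bytes())
hasher.update(b"-O2 -mcpu=cortex-m4")           # exact compiler flags
hasher.update(b"analysis-config-v3")            # analysis configuration version
print(f"wcet-cache-{hasher.hexdigest()[:16]}")  # use as the CI cache key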

Advanced tactics: combining static and measurement analyses

Hybrid strategies give stronger assurance and better performance:

  • Use RocqStat static analysis nightly to detect structural timing regressions and to produce per-path candidates.
  • Prioritize measurement runs for the top-K paths or functions that contribute most to WCET estimates (a prioritization sketch follows this list).
  • Use static analysis to reduce the search space of measurements — e.g., verify only feasible worst paths identified by static tools.
  • Instrument and collect traces in CI, replay traces on an analyzer to refine microarchitectural parameters and feed analytics-first workflows for prioritization.
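
A prioritization sketch for the top-K idea above: rank functions by their static WCET estimate and queue the worst offenders for HIL measurement. The schema and queue file are assumptions.

#!/usr/bin/env python3
"""Sketch: queue the top-K static WCET contributors for HIL measurement.

Reuses the assumed {"functions": {name: cycles}} schema; K is a tuning knob."""
import json
from pathlib import Path

K = 10
functions = json.loads(Path("wcet/report.json").read_text())["functions"]
top = sorted(functions.items(), key=lambda kv: kv[1], reverse=True)[:K]

with open("wcet/hil_queue.txt", "w") as fh:
    for func, wcet in top:
        fh.write(f"{func}\n")
        print(f"queued for HIL measurement: {func} ({wcet} cycles)")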

Common pitfalls and how to avoid them

  • Assuming measurements alone are safe: Empirical runs can miss corner-case paths. Complement with static analysis.
  • Changing toolchains silently: Enforce review of compiler/linker updates because they frequently change timing.
  • Bad baselines: Create baselines using the most conservative verified runs and update them only after documented analysis.
  • Ignoring interrupts and OS jitter: Model interrupts and RTOS effects in static analysis or include them in measurement scenarios.

Example triage workflow for a timing regression

  1. CI posts a WCET regression alert with function-level hotspots and relevant diffs.
  2. Developer reproduces locally using the archived build artifact and analysis config.
  3. If measurement-based, run repeated HIL runs and collect traces to confirm reproducibility.
  4. Use static tool to determine feasible path(s) causing the increase and check code diffs for added loops, recursion or new library calls.
  5. Mitigate: refactor to tighter bounds, reduce worst-case code paths, or request budget adjustment with justification.
  6. Update the CI baseline only after review and a re-verification run passes (a baseline-promotion sketch follows this list).
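
The baseline-promotion sketch from step 6: copy the verified report over the baseline and record who approved it and why, so the update itself is auditable. File names and fields are illustrative.

#!/usr/bin/env python3
"""Sketch: promote a verified report to the new baseline with an audit note.

Usage (illustrative): python promote_baseline.py <reviewer> <justification>"""
import json
import shutil
import sys
import time
from pathlib import Path

reviewer, justification = sys.argv[1], sys.argv[2]
shutil.copy("wcet/report.json", "wcet/baseline.json")
audit = {
    "updated": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "reviewer": reviewer,
    "justification": justification,  # e.g. approved budget increase with ticket ID
}
Path("wcet/baseline_audit.json").write_text(json.dumps(audit, indent=2))
print("Baseline updated; archive both files with the re-verification run.")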

Case example: accelerating time-to-feedback

One automotive team we advised replaced nightly-only WCET runs with an incremental CI pattern: per-PR static analysis for changed modules and a prioritized HIL queue for top-10 candidates. Results: median developer feedback on timing issues dropped from 48 hours to 2 hours, and the number of late-cycle timing finds fell by 85% in six months. The team accomplished this by automating artifact storage, using self-hosted runners for licensed tools, and adding machine-readable WCET reports into their PR status checks.

Future-proofing WCET in your CI (2026+)

  • Expect tighter vendor integrations: With Vector integrating RocqStat into VectorCAST, look for deeper APIs and standardized report formats in 2026 that make CI parsing easier.
  • Edge cloud CI: Secure, hybrid CI that combines cloud scale with local HIL racks will become mainstream — design for orchestration now.
  • Infrastructure-as-code for DUTs: Automate device config (power, clocks, DVFS) in CI to reduce variance in HIL runs and tie device lifecycle to provisioning playbooks.
  • Analytics-first workflows: Use analytics to identify code hotspots and drive focused verification rather than full-analysis every run.

Quick checklist to get started this week

  • Pin compilers and create immutable CI build images.
  • Run a baseline RocqStat static analysis on current release artifacts and archive the report.
  • Enable a self-hosted runner with licensed access to your WCET tool and add a CI stage that runs it on PRs.
  • Define gating rules (hard for cert budgets, soft warnings for small deltas).
  • Automate artifact storage for tool qualification and traceability in an immutable artifact store.

Closing: turn timing analysis into a CI-first capability

Embedding WCET checks in CI transforms timing verification from a late-cycle risk into an early, automatable quality gate. With vendor moves like Vector's acquisition of RocqStat, expect richer, more integrated toolchains through 2026 — and an opportunity to bake timing safety into your development lifecycle.

Actionable next step: Start by adding a static WCET stage to your CI that consumes the exact artifacts your compiler produces. Use the sample GitHub Action above as a template and run a baseline. If you use VectorCAST or RocqStat, evaluate their latest CI/CLI integration options and plan your self-hosted runner strategy for licensed execution.

Get help

If you want a hands-on starter template for GitHub Actions or Jenkins tailored to your toolchain and DUTs, reach out for a customized CI blueprint and a 30-minute audit of your current pipeline.


Related Topics

#embedded #ci-cd #safety
