Abstract

The rapid evolution of large-scale TypeScript and React ecosystems introduces significant heterogeneity in how software quality is defined, measured, and enforced. Static analysis, type checking, testing, and runtime validation each capture partial aspects of software correctness, yet their fragmentation across toolchains results in semantic inconsistencies, duplicated configuration, and weak policy enforcement within continuous integration pipelines.

This article presents the Dynamic And Static Health (DASH) Framework, an architectural and methodological model designed to unify static and dynamic quality dimensions across JavaScript, TypeScript, and React environments. DASH formalizes software quality as a multi-dimensional vector of measurable signals spanning linting, typing, complexity, duplication, accessibility, test coverage, and performance. Each dimension includes explicit threshold definitions that describe what must hold true, independent of the tools enforcing it.

The framework introduces (1) a modular adapter layer translating heterogeneous outputs from ESLint, TypeScript, Vitest, and performance profilers into normalized machine-readable reports; (2) a policy-as-code engine that evaluates unified metrics against thresholds and enforces decisions in CI/CD pipelines; and (3) an incremental adoption strategy enabling safe deployment across hybrid monorepos without architectural rewrites.

Deployed within OVHcloud Manager projects under the Manager Control Tower team, DASH has harmonized quality governance across more than one hundred interdependent applications and shared modules. Empirical results demonstrate reduced configuration drift, faster feedback cycles, and improved traceability of regressions. Beyond its engineering outcomes, DASH establishes a reproducible model for policy-driven quality reasoning in large-scale front-end systems.

Keywords: Software Quality Assurance, Static and Dynamic Analysis, Policy-as-Code, Continuous Integration, TypeScript, React.

Introduction & Problem Statement

The unification of software quality assurance across static and dynamic dimensions remains an unresolved challenge in large-scale front-end ecosystems. In modern TypeScript and React architectures, dozens of heterogeneous analyzers coexist: linters, compilers, type checkers, test runners, and performance profilers, each governed by its own semantics, configuration grammar, and reporting conventions. This fragmentation leads to systemic inefficiencies: duplicated configuration, inconsistent enforcement, and the absence of a reproducible definition of quality across continuous integration pipelines.

Within the OVHcloud Manager ecosystem, this challenge is amplified by scale and heterogeneity. The environment comprises over seventy-five production applications and more than one hundred and twenty shared modules, spanning both modern React and legacy AngularJS systems. All components coexist within a unified monorepo infrastructure, yet evolve under decentralized ownership, each team defining its own degree of TypeScript strictness, enabling or disabling ESLint and Prettier rules, and applying local overrides that may shadow or nullify the organization's root configuration. Over time, this autonomy has created significant configuration drift, policy divergence, and blind spots where local exceptions effectively disable shared governance.

In such environments, static quality (e.g., linting, typing, structural consistency) and dynamic quality (e.g., test coverage, runtime performance, accessibility) are maintained by isolated toolchains whose results cannot be compared or aggregated meaningfully. Their separation prevents any unified understanding of whether a system satisfies organizational standards, or how local deviations contribute to global degradation. Moreover, each tool upgrade or rule change propagates as a manual cascade of edits across multiple projects, creating a form of semantic coupling between the tool itself and the policy it enforces. The outcome is a fragile quality infrastructure, where consistency depends more on convention than on verifiable structure.

The absence of a shared quality ontology leads to tool-specific silos and opaque feedback loops, as illustrated in Figure 1:

┌───────────────────────────────────────────────┐
│        Unified Monorepo Infrastructure        │
│ (React ecosystem — 75+ apps, 120+ shared libs)│
│ ⚠️ Legacy AngularJS apps are excluded from DASH│
└───────────────────────────────────────────────┘

 (React + TS strict)         (React + TS loose)        (ESLint overrides)       (Legacy React)
┌───────────────────────┐    ┌────────────────────┐    ┌────────────────────┐    ┌─────────────────────┐
│ App A (TS: strict ON) │    │ App B (TS: loose)  │    │ App C (Local ESLint)│   │ Legacy React App    │
└──────────┬────────────┘    └──────────┬─────────┘    └──────────┬─────────┘    └──────────┬──────────┘
           │                             │                       │                       │
           │ Local configs + overrides   │ Local configs + loose │ Local rules override   │ Legacy configs
           ▼                             ▼                       ▼                       ▼
     ┌───────────┐                 ┌───────────┐           ┌───────────┐             ┌───────────┐
     │ TypeScript│                 │ ESLint    │           │ Vitest    │             │ Prettier  │
     └────┬──────┘                 └────┬──────┘           └────┬──────┘             └────┬──────┘
           │                             │                       │                         │
           └─────────────────────────────┴───────────┬───────────┴─────────────────────────┘
                                                      ▼
                            ┌──────────────────────────────────────────────┐
                            │   Dispersed Logs and Outputs (Hidden)        │
                            │   - lint-report.json (local only)            │
                            │   - ts-results.html (not aggregated)         │
                            │   - coverage.xml, bundle.html (scattered)    │
                            │   → No unified observability or dashboards   │
                            └──────────────────────────────────────────────┘

NOTE: Before DASH, quality signals were isolated and invisible at scale.  
      Reports were generated per-app, using local rules and thresholds.  
      No centralized aggregation or semantic consistency existed across tools.

The framework addresses this gap. It proposes a formal abstraction in which software quality is modeled as a multi-dimensional vector, defined over measurable indicators that span both static and dynamic aspects of the codebase. Instead of embedding semantics within each tool, DASH externalizes them into a central policy definition that expresses what must hold true, through explicit thresholds and constraints, while delegating how those conditions are verified to specialized adapters. This separation enables coherent aggregation and evaluation, consistent reporting, and controlled tool evolution across a heterogeneous ecosystem.

In response to these structural limitations, DASH introduces a continuous quality observability layer that integrates heterogeneous tools under a shared semantic model and provides daily and weekly insights across the ecosystem.

┌──────────────────────────────────────────────────────────┐
│                 Applications & Modules                   │
│     (React ecosystem only — 75+ apps, 120+ shared libs)  │
│ ⚠️ Legacy AngularJS apps are detected but skipped safely │
└───────────────┬───────────────┬───────────────┬──────────┘
                │               │               │
                ▼               ▼               ▼
┌──────────────────────────────────────────────┐
│              DASH Adapters Layer             │
│ - ESLint Adapter (Static lint, a11y)         │
│ - TypeScript Adapter (Type coverage)         │
│ - Vitest Adapter (Dynamic test coverage)     │
│ - Performance Budgets Adapter (Vite bundles) │
│ - HTML / A11y Validators (semantic checks)   │
└──────────────────────┬───────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────────────────────┐
│     Normalized Reports (Machine + Human Readable)            │
│  - JSON artifacts → daily aggregation                        │
│  - HTML dashboards → transparent visualization               │
│  → Unified quality visibility across all React modules       │
└──────────────┬───────────────────────────────────────────────┘
               │
               ▼
┌──────────────────────────────────────────────────────────────┐
│        DASH Policy & Metrics Evaluator                       │
│  - Global thresholds defined per metric (not per app)        │
│  - Aggregates S(t) (static) and D(t) (dynamic) vectors        │
│  - Produces normalized, interpretable quality dimensions     │
└──────────────┬───────────────────────────────────────────────┘
               │
               ▼
┌──────────────────────────────────────────────────────────────┐
│ Continuous Deployment (CD) Quality Monitoring Layer          │
│  - Daily run: aggregates reports across all React apps       │
│  - Weekly run: generates adoption reports per tool           │
│  - Sends Webex notifications (status + adoption summaries)   │
│  - Observability only — no gating or blocking enforcement    │
└──────────────────────────────────────────────────────────────┘

NOTE: DASH acts as a continuous quality observability framework.
It collects daily metrics and weekly adoption insights across the React ecosystem,
prioritizing transparency, comparability, and incremental governance evolution.

Figure 2 illustrates how the DASH architecture restructures this fragmented tool landscape into a unified, observable, and policy-driven quality layer.

In essence, DASH reframes quality from a fragmented, tool-specific activity into a semantics-driven governance layer, where rules, metrics, and thresholds are first-class citizens of the software architecture. The following sections detail the theoretical quality model, architectural design, and empirical evaluation of DASH within the OVHcloud Manager monorepo, demonstrating how structural abstraction and semantic consistency can re-establish determinism, reproducibility, and controlled evolution in large-scale front-end ecosystems.

Quality Model

DASH models software quality as a structured, multi-dimensional process rather than a single scalar indicator.

It formalizes the relationship between static properties (structural, compile-time, and configuration metrics) and dynamic properties (runtime, behavioural, and test-based metrics), measured continuously through automated adapters triggered within the Continuous Deployment (CD) workflow.

By design, DASH treats quality as both semantic and temporal: each measurable property is normalized, aggregated, and observed over time, enabling organizations to track the evolution of conformance rather than the state of a single execution.

Static and Dynamic Spaces

DASH distinguishes two orthogonal measurement spaces:

S(t) = [ s₁(t), s₂(t), …, sₘ(t) ]   →  Static Quality Vector
D(t) = [ d₁(t), d₂(t), …, dₙ(t) ]   →  Dynamic Quality Vector

At each discrete time t (deployment cycle or daily run):

  • S(t) captures compile-time and structural attributes such as ESLint compliance, TypeScript strictness, duplication ratio, and type coverage.
  • D(t) captures execution-time characteristics such as test coverage, performance budgets, accessibility validation, and runtime profiling.

The complete quality state of an application aᵢ is:

Qᵢ(t) = [ Sᵢ(t), Dᵢ(t) ]

Each coordinate of Qᵢ(t) represents a specific and interpretable dimension of software quality that can be observed, compared, and trended across time.
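
For readers who prefer code over notation, the two spaces can be sketched as plain TypeScript structures. The names below (StaticDimension, DynamicDimension, QualityState) are illustrative only and are not part of the published kit; values are shown already normalized, as described in the next subsection.

// Hypothetical type-level sketch of the DASH quality vectors (not the kit's actual API).
type StaticDimension = 'eslint' | 'typeStrictness' | 'duplication' | 'typeCoverage';
type DynamicDimension = 'testCoverage' | 'perfBudget' | 'accessibility';

interface QualityState {
  app: string;                        // application identifier aᵢ
  t: string;                          // ISO timestamp of the analysis run
  S: Record<StaticDimension, number>; // S(t): compile-time / structural signals
  D: Record<DynamicDimension, number>; // D(t): execution-time signals
}

// Qᵢ(t) is simply the concatenation of both spaces for one application at one run.
const example: QualityState = {
  app: 'zimbra',
  t: '2024-01-01T00:00:00Z',
  S: { eslint: 0.92, typeStrictness: 1, duplication: 0.88, typeCoverage: 0.83 },
  D: { testCoverage: 0.74, perfBudget: 0.95, accessibility: 0.9 },
};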

Normalization

Every raw measurement mₖ(t) originates from a concrete adapter invoked during the CD pipeline.

For example:

  • Code Duplication Adapter → duplicated lines percentage,
  • Performance Budgets Adapter → bundle size relative to thresholds,
  • Tests Coverage Adapter → line and branch coverage,
  • TypeScript Coverage Adapter → typed identifiers ratio.

Because each tool emits results with different scales (bytes, ratios, percentages), DASH defines a normalization function fₖ that maps each metric into a dimensionless value in the interval [0, 1]:

xₖ(t) = fₖ(mₖ(t)) ∈ [0, 1]

where:

  • xₖ(t) = 0 → total non-conformance (metric violates all defined conditions),
  • xₖ(t) = 1 → full satisfaction (metric meets the target),
  • intermediate values → proportional compliance.
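
To make fₖ concrete, here is a minimal sketch of a threshold-based normalization for a "lower is better" metric such as the duplicated-lines percentage. The function name and the target/limit values are hypothetical, not the kit's actual implementation.

// Hypothetical sketch of a normalization function fₖ (not the kit's actual code).
// Maps a "lower is better" raw metric (e.g., duplicated lines %) into [0, 1].
function normalizeLowerIsBetter(raw: number, target: number, limit: number): number {
  if (raw <= target) return 1;             // at or below target → full compliance
  if (raw >= limit) return 0;              // at or beyond the hard limit → non-conformance
  return (limit - raw) / (limit - target); // linear interpolation in between
}

// Example: 7.5 % duplicated lines, target 3 %, hard limit 15 % → xₖ ≈ 0.625
const xDuplication = normalizeLowerIsBetter(7.5, 3, 15);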

Normalized outputs are stored as machine-readable JSON artifacts and rendered as HTML dashboards (e.g., Code Duplication Report, Performance Budgets Report, Tests Coverage Report, Types Coverage Report).

These dual artifacts constitute DASH's observability layer, enabling both automated consumption and human inspection.

The normalized vector for application aᵢ at time t is:

Qᵢ(t) = [ x₁(t), x₂(t), …, xₖ(t) ]

Each coordinate corresponds to a concrete dimension of quality, and a regression in any xₖ(t) directly identifies the affected property and its deviation magnitude.

Aggregation Across Dimensions

While normalization produces independent compliance scores, DASH can optionally aggregate them to assess overall balance across dimensions.

A policy-defined aggregation function Φ combines normalized values xₖ(t) with weights wₖ representing organizational importance:

Q̄ᵢ(t) = Φ({xₖ(t)}, {wₖ})

🔵 Linear Aggregation (default):

Φ_lin = ( Σₖ wₖ · xₖ(t) ) / ( Σₖ wₖ )

This linear mean yields an interpretable weighted compliance index, suitable for daily dashboards and trend visualization. It assumes all dimensions contribute proportionally to perceived quality.

🔵 Nonlinear Aggregation (sensitivity-adjusted):

Certain dimensions (e.g., performance budgets, accessibility) exhibit nonlinear impact: small degradations near thresholds can be critical. To capture this, DASH supports a logarithmic variant:

Φ_nonlin = ( Σₖ wₖ · log(1 + α · xₖ(t)) ) / ( Σₖ wₖ )

where:

  • α > 1 amplifies penalty for regressions near compliance limits,
  • α < 1 smooths fluctuations near saturation.
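
Both variants translate directly into code. The sketch below implements Φ_lin and Φ_nonlin exactly as defined above; the weights and the α value in the example are illustrative.

// Sketch of the policy-defined aggregation functions (weights and α are illustrative).
type WeightedSignal = { x: number; w: number }; // x ∈ [0, 1], w = organizational weight wₖ

// Φ_lin: weighted arithmetic mean of normalized signals.
function phiLinear(signals: WeightedSignal[]): number {
  const totalWeight = signals.reduce((sum, s) => sum + s.w, 0);
  return signals.reduce((sum, s) => sum + s.w * s.x, 0) / totalWeight;
}

// Φ_nonlin: logarithmic variant, amplifying (α > 1) or smoothing (α < 1) sensitivity.
function phiNonlinear(signals: WeightedSignal[], alpha: number): number {
  const totalWeight = signals.reduce((sum, s) => sum + s.w, 0);
  return signals.reduce((sum, s) => sum + s.w * Math.log(1 + alpha * s.x), 0) / totalWeight;
}

// Example: duplication, type coverage, and performance budget with unequal weights.
const signals = [
  { x: 0.88, w: 1 },   // duplication
  { x: 0.83, w: 2 },   // type coverage
  { x: 0.95, w: 1.5 }, // performance budget
];
const dailyIndex = phiLinear(signals);
const sensitiveIndex = phiNonlinear(signals, 2);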

In practice, Φ is used to:

  • Generate daily summaries for CD dashboards,
  • Feed Webex notifications highlighting regressions,
  • Provide aggregated indicators to phTool (Project Health Tool) for badges, historical analytics, and long-term trend visualization.

Transparency over compression: Aggregation in DASH is an analytical aid, not a substitute for raw dimensional data. The framework always retains full vectors Qᵢ(t) to preserve traceability, auditability, and semantic clarity.

Policy Evaluation Function

Once normalized metrics are collected, DASH applies a policy evaluation function P that maps each dimension to a categorical state (Pass, Warn, or Fail) based on its threshold pair (θₖ^warn, θₖ^pass):

Pₖ(xₖ) =
    Fail ,  if xₖ < θₖ^warn
    Warn ,  if θₖ^warn ≤ xₖ < θₖ^pass
    Pass ,  if xₖ ≥ θₖ^pass
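
In code, Pₖ reduces to two comparisons against the per-metric threshold pair. The following sketch is illustrative and does not reflect the kit's actual API:

// Illustrative sketch of the policy evaluation function Pₖ (not the kit's actual API).
type PolicyState = 'Pass' | 'Warn' | 'Fail';

interface MetricThresholds {
  warn: number; // θₖ^warn
  pass: number; // θₖ^pass
}

function evaluateMetric(x: number, { warn, pass }: MetricThresholds): PolicyState {
  if (x < warn) return 'Fail';
  if (x < pass) return 'Warn';
  return 'Pass';
}

// Example: type coverage of 0.83 against warn = 0.6, pass = 0.8 → 'Pass'
const state = evaluateMetric(0.83, { warn: 0.6, pass: 0.8 });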

The resulting policy state for an application is:

P(Qᵢ(t)) = [ P₁(x₁), P₂(x₂), …, Pₖ(xₖ) ]

Evaluations occur automatically after report generation in the CD pipeline, producing visual badges (✅ Pass, ⚠️ Warn, ❌ Fail) in dashboards and triggering Webex notifications summarizing results.

Governance consistency: Thresholds are defined per metric, not per application. This ensures consistent semantics, comparability across projects, and prevents local relaxation of standards. By enforcing a single policy space, DASH maintains centralized governance while supporting distributed execution.

Temporal Dimension and Prospective History

DASH analyses are executed periodically through the Continuous Deployment (CD) pipeline. At each cron execution, all adapters are re-triggered to produce up-to-date normalized metrics and reports.

However, the current implementation performs stateless evaluation: it computes quality vectors Qᵢ(t) independently at each run, without persisting or comparing historical data.

Formally, each metric can still be conceptualized as a time-dependent function:

xₖ : T → [0, 1]

where T represents the discrete sequence of analysis executions.

Yet, without persistent storage, DASH currently observes only the instantaneous state at time t, not its evolution.

Historical reasoning, such as tracking regressions, computing drift, or deriving adoption velocity, requires a persistent layer to store successive metric states:

Δxₖ = xₖ(t) − xₖ(t − Δt)
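
Conceptually, once successive snapshots are persisted, drift becomes a per-dimension difference between two stored vectors. The sketch below illustrates that future capability only; DASH does not currently store or compare historical runs.

// Conceptual sketch only: DASH does not yet persist history; phTool is expected to.
type Snapshot = Record<string, number>; // metric key → normalized value xₖ ∈ [0, 1]

// Δxₖ = xₖ(t) − xₖ(t − Δt) for every metric present in both snapshots.
function computeDrift(current: Snapshot, previous: Snapshot): Record<string, number> {
  const drift: Record<string, number> = {};
  for (const key of Object.keys(current)) {
    if (key in previous) drift[key] = current[key] - previous[key];
  }
  return drift;
}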

This capability will be introduced through phTool (Project Health Tool), an advanced aggregation system designed to:

  • store per-metric histories across executions,
  • visualize quality evolution over time,
  • assign badges and quality gates based on sustained conformance,
  • and compute adoption trends at both metric and ecosystem scale.

In the current phase, DASH provides snapshot observability rather than temporal analytics: it ensures that every execution produces consistent, comparable, and auditable quality data.

Time-series reasoning remains part of the framework's next evolutionary stage.

Interpretation

Through this formalization, DASH transforms quality from fragmented, tool-specific outputs into a coherent, semantic, and governance-oriented model.

It bridges the gap between daily engineering practice and organizational oversight by introducing a structured representation of quality that is both machine-interpretable and human-readable.

At its current stage, DASH provides consistent, reproducible snapshots of quality across all applications, ensuring that every CD execution yields a comparable and auditable view of the ecosystem's health.

Although temporal analytics (e.g., drift or velocity) are not yet computed, the framework establishes the necessary foundations for these capabilities to be integrated later through persistent tooling such as phTool.

Specifically, DASH provides:

  • Mathematical traceability: through its formalized mappings fₖ, Φ, and P, which define how raw metrics are normalized, aggregated, and evaluated;
  • Continuous observability: via automated, CD-driven adapters that generate structured JSON and HTML reports for each metric;
  • Governance consistency: by enforcing metric-level thresholds and shared policies across all projects;
  • Transparent, comparable dashboards: ensuring that quality data remains explainable, reproducible, and aligned with organizational standards.

DASH thus acts as a bridge between measurement and meaning, transforming raw tool outputs into a structured language of software health, scalable from individual metrics to ecosystem-wide observability.

Architecture and Implementation

Overview

The framework implements a three-layer architecture that transforms heterogeneous static analysis results into normalized, semantically interpretable quality data.

Its design isolates concerns between metric acquisition, execution orchestration, and report normalization, ensuring deterministic behavior and cross-tool interoperability:

┌──────────────────────────────┐
│   Static Analysis Layer      │  ← individual analyzers (duplication, coverage, perf, types…)
├──────────────────────────────┤
│   Dynamic Orchestration Layer│  ← CLI adapters, discovery, aggregation logic
├──────────────────────────────┤
│   Normalization & Reporting  │  ← unified JSON schema, HTML dashboards, thresholds
└──────────────────────────────┘

This layered structure allows multiple analyzers, each with distinct semantics and engines (e.g., jscpd, Vitest, type-coverage, vite-bundle-analyzer), to coexist under a shared governance model without fragmenting quality semantics or reporting formats.

Static Layer

The Static Layer constitutes the foundation of DASH. It hosts domain-specific adapters, each encapsulating a specialized analysis engine. These adapters act as emitters of normalized quality data, wrapping raw outputs into standardized JSON schemas.

Each adapter resides under:

manager-static-analysis-kit/src/adapters/

and produces deterministic outputs in directories of the form:

<metric>-reports/
├── <target>/
│   ├── <metric>-report.json
│   └── index.html
├── <metric>-combined-report.json
└── <metric>-combined-report.html

Examples of implemented analyzers:

  • Code Duplication Adapter (code-duplication/): integrates jscpd, computing duplicated lines, tokens, and block counts per application or package.
  • Performance Budgets Adapter (perf-budgets/): integrates vite-bundle-analyzer, collecting per-asset weights (JS, CSS, HTML, images) and evaluating them against web medians.
  • Tests Coverage Adapter (tests-coverage/): aggregates Vitest or Jest coverage summaries across all modules.
  • Type Coverage Adapter (types-coverage/): uses type-coverage to quantify type annotation completeness.
  • HTML W3C and Accessibility Validation adapters: expose testing matchers (toBeValidHtml(), toBeAccessible()) that extend DASH semantics into the test runtime layer.

Each adapter normalizes its output through a common schema:

{
  "percentage": 82.6,
  "status": "green",
  "thresholds": { "green": 80, "orange": 60 },
  "worstFiles": [
    ["src/components/Table.tsx", { "percentage": 45.3 }]
  ]
}

This schema enforces structural isomorphism across all analyzers, independent of their raw data source, and enables unified downstream aggregation.
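
Expressed as a TypeScript type, the shared schema looks roughly as follows. This declaration is reconstructed from the example above and may differ from the kit's actual source:

// Reconstructed from the example above; the kit's actual type declarations may differ.
type ReportStatus = 'green' | 'orange' | 'red';

interface NormalizedReport {
  percentage: number;                                   // primary metric, 0–100
  status: ReportStatus;                                 // threshold classification
  thresholds: { green: number; orange: number };        // per-metric policy limits
  worstFiles: Array<[string, { percentage: number }]>;  // ranked degradation hotspots
}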

Dynamic Orchestration Layer

The Dynamic Layer constitutes the execution and aggregation backbone of DASH. It controls discovery, target resolution, process spawning, and dataset merging.

Each analyzer exposes a CLI entrypoint (manager-code-duplication, manager-types-coverage, etc.) that encapsulates this orchestration pipeline.

1️⃣ Discovery and Target Resolution:

DASH's discovery logic detects all analyzable entities (applications, packages, and libraries) through argument selectors:

--app <name>             # single app
--apps <list>            # multiple apps
--package <name>         # single package
--packages <list>        # multiple packages
--library <name>         # single library
--libraries <list>       # multiple libraries

2️⃣ Resolution process:

✔️ Input Parsing: The CLI parser distinguishes selector types and supports mixed invocation (e.g., --apps zimbra,container --libraries manager-ui-kit).

✔️ Path & Package Resolution: Each name maps to canonical roots:

  • manager/apps/ → applications
  • packages/manager/* → internal packages
  • shared libraries → e.g., manager-ui-kit, shell-client

Both folder and package.json names are accepted.

✔️ Validation & Filtering: Invalid entries are logged but skipped. Execution proceeds if ≥ 1 valid target is found (CI resilience).

✔️ Exit Semantics: All invalid → exit 1; otherwise → exit 0, ensuring deterministic automation feedback.

💠 This flexible mechanism allows consistent quality evaluation across monorepo scopes while maintaining uniform governance semantics.
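
A simplified sketch of this resolution and exit logic is shown below; the root path constant and helper name are assumptions for illustration, not the kit's actual collectors code.

// Illustrative sketch of target resolution and exit semantics (not the kit's actual code).
import { existsSync } from 'node:fs';
import { join } from 'node:path';

const APP_ROOT = 'manager/apps'; // assumed canonical root for applications

function resolveApps(names: string[]): { valid: string[]; invalid: string[] } {
  const valid: string[] = [];
  const invalid: string[] = [];
  for (const name of names) {
    (existsSync(join(APP_ROOT, name)) ? valid : invalid).push(name);
  }
  return { valid, invalid };
}

const { valid, invalid } = resolveApps(['zimbra', 'unknown-app', 'container']);
invalid.forEach((name) => console.warn(`Skipping unknown target: ${name}`));

// Exit 1 only when every requested target is invalid; otherwise continue (CI resilience).
if (valid.length === 0) process.exit(1);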

3️⃣ Execution and Data Collection:

For each resolved target, DASH spawns a subprocess via Node's spawnSync API to run the corresponding engine (e.g., jscpd, vite-bundle-analyzer, Vitest, type-coverage).

Each process emits raw JSON which is immediately normalized and persisted. The pipeline guarantees reproducibility by enforcing:

  • consistent field naming and rounding precision,
  • deterministic ordering of object keys,
  • and strict directory naming conventions.

Every execution thus produces a self-contained artifact tree, ensuring verifiable traceability across time and CI environments.
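
The execution step can be sketched as follows for the duplication engine. The jscpd flags shown (--reporters, --output) are documented jscpd options, but the report file name and JSON shape read back here are assumptions to check against the installed version; everything else is illustrative.

// Illustrative sketch of engine execution and normalization (not the kit's actual code).
import { spawnSync } from 'node:child_process';
import { mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

function runDuplicationAnalysis(target: string, targetPath: string): void {
  const outDir = join('code-duplication-reports', target);
  mkdirSync(outDir, { recursive: true });

  // Spawn the underlying engine synchronously and let it write its JSON report into outDir.
  const result = spawnSync('npx', ['jscpd', '--reporters', 'json', '--output', outDir, targetPath], {
    encoding: 'utf8',
  });
  if (result.status !== 0) {
    console.warn(`jscpd failed for ${target}: ${result.stderr}`);
    return;
  }

  // Report file name and field layout are assumptions about jscpd's JSON reporter output.
  const raw = JSON.parse(readFileSync(join(outDir, 'jscpd-report.json'), 'utf8'));
  const duplicatedPercentage: number = raw.statistics.total.percentage;

  // Persist the normalized DASH artifact with deterministic field order and rounding.
  const percentage = Number((100 - duplicatedPercentage).toFixed(2));
  writeFileSync(
    join(outDir, 'code-duplication-report.json'),
    JSON.stringify({ percentage, status: percentage >= 80 ? 'green' : 'orange' }, null, 2),
  );
}

runDuplicationAnalysis('zimbra', 'manager/apps/zimbra/src');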

Normalization and Reporting

The Normalization and Reporting Layer is the final stage of DASH's analytical workflow.

Its primary purpose is to transform tool-specific data streams into standardized, deterministic, and human-auditable artifacts.

Unlike traditional static analysis aggregators, DASH does not rely on external databases or dashboards; instead, it synthesizes fully self-contained reports that can be read by humans and parsed by automation.

1️⃣ Aggregation and Schema Harmonization:

Each analyzer (duplication, performance, tests, types, accessibility, W3C) produces an independent JSON artifact per target under a consistent directory convention such as:

code-duplication-reports/zimbra/code-duplication-report.json
types-coverage-reports/manager-ui-kit/types-coverage-report.json

After execution, these JSON outputs are merged into a combined dataset at the analyzer root level. This merging process aligns heterogeneous metrics (percentages, byte sizes, or coverage ratios) under a shared structural schema:

{
  "app": "zimbra",
  "percentage": 87.3,
  "status": "green",
  "thresholds": { "green": 80, "orange": 60 },
  "worstFiles": [
    ["src/pages/UsersTable.tsx", { "percentage": 46.1 }]
  ]
}

Each numeric dimension emitted by an adapter (mₖ(t)) is normalized into a unitless conformance value using adapter-specific transformation functions fₖ:

xₖ(t) = fₖ(mₖ(t)) ∈ [0,1]
  • mₖ(t) = raw metric (e.g., duplicated lines %, JS bundle size KB, type ratio %)
  • fₖ = normalization rule (linear or threshold-based mapping)
  • xₖ(t) = normalized compliance (0 = non-conformance, 1 = full compliance)

These normalized signals compose the application quality vector:

Qᵢ(t) = [x₁(t), x₂(t), …, xₖ(t)]

Each coordinate corresponds to a measurable quality dimension (type coverage, duplication ratio, accessibility compliance, etc.), ensuring that all analyzers emit structurally compatible data ready for unified rendering.
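
A condensed sketch of this merging step (not the kit's actual json-aggregator.mjs) could look like this:

// Illustrative sketch of per-target report aggregation (not the kit's json-aggregator).
import { readdirSync, readFileSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

interface TargetReport { percentage: number; status: string }

function combineReports(reportsRoot: string, metric: string): void {
  const combined: Array<{ app: string } & TargetReport> = [];

  // Each sub-directory of <metric>-reports/ holds one target's normalized JSON artifact.
  for (const target of readdirSync(reportsRoot, { withFileTypes: true })) {
    if (!target.isDirectory()) continue;
    const file = join(reportsRoot, target.name, `${metric}-report.json`);
    const report: TargetReport = JSON.parse(readFileSync(file, 'utf8'));
    combined.push({ app: target.name, ...report });
  }

  // Deterministic ordering keeps combined artifacts diffable across runs.
  combined.sort((a, b) => a.app.localeCompare(b.app));
  writeFileSync(join(reportsRoot, `${metric}-combined-report.json`), JSON.stringify(combined, null, 2));
}

combineReports('code-duplication-reports', 'code-duplication');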

The resulting combined files follow the naming convention:

<metric>-combined-report.json
<metric>-combined-report.html

These serve as the single source of truth for both automation (JSON) and visualization (HTML).

2️⃣ HTML Report Synthesis:

The final step transforms normalized JSON into fully static HTML dashboards, generated by a shared rendering engine located under src/renderers/html-dashboard.mjs.

This renderer applies a consistent visual and semantic structure across all analyzers:

  • Global summary banner: displays overall compliance rate and threshold color (🟢 green, 🟠 orange, 🔴 red)
  • Collapsible per-target panels: one per app, package, or library
  • Ranked "worst files" sections: surfaces local degradation hotspots
  • Inline tooltips: show full file paths and precise metric values

All dashboards are zero-dependency artifacts: no JavaScript bundles, stylesheets, or external assets are required. They can be served statically, archived, or attached directly in Continuous Deployment notifications.
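
The zero-dependency idea can be illustrated with a minimal renderer; the real src/renderers/html-dashboard.mjs is considerably richer, and the markup below is only a sketch:

// Illustrative sketch of zero-dependency HTML rendering (the real renderer is richer).
import { writeFileSync } from 'node:fs';

interface CombinedEntry { app: string; percentage: number; status: 'green' | 'orange' | 'red' }

function renderDashboard(title: string, entries: CombinedEntry[]): string {
  const rows = entries
    .map((e) => `<tr><td>${e.app}</td><td>${e.percentage}%</td><td class="${e.status}">${e.status}</td></tr>`)
    .join('\n');
  // Inline styles only: no bundles, stylesheets, or external assets.
  return `<!doctype html>
<html><head><meta charset="utf-8"><title>${title}</title>
<style>td.green{color:#2e7d32}td.orange{color:#ef6c00}td.red{color:#c62828}</style></head>
<body><h1>${title}</h1><table><tr><th>Target</th><th>Score</th><th>Status</th></tr>
${rows}
</table></body></html>`;
}

writeFileSync('code-duplication-combined-report.html', renderDashboard('Code Duplication', [
  { app: 'zimbra', percentage: 87.3, status: 'green' },
]));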

Each analyzer thus emits a pair of synchronized artifacts:

<metric>-reports/
├── <target>/
│   ├── <metric>-report.json
│   └── index.html
├── <metric>-combined-report.json
└── <metric>-combined-report.html

Together, they form DASH's observability layer:

  • the JSON artifacts feed automation pipelines and governance scripts,
  • while the HTML dashboards provide human-readable insight into the system's structural and behavioral quality.

The Normalization and Reporting layer turns diverse, ephemeral analysis outputs into structured, immutable, and interpretable quality evidence.

Its deterministic design guarantees that each run of DASH yields traceable, diffable, and auditable artifacts, forming the factual substrate upon which higher-level tools (such as phTool) can build longitudinal metrics, badges, and adoption analytics.

The complete runtime execution path of DASH, from CLI invocation to normalized report generation, is summarized in Figure 3. It illustrates how analyzers, orchestrators, and renderers interact during a typical analytical run:

┌───────────────────────────────┐
│       CLI Invocation          │
│ (manager-code-duplication …)  │
└──────────────┬────────────────┘
               │
               ▼
     Target Discovery & Resolution
               │
               ▼
   Engine Execution (jscpd / Vitest / type-coverage / vite-bundle-analyzer)
               │
               ▼
     Raw Metric Output (mₖ)
               │
               ▼
 Normalization Functions fₖ(mₖ) → xₖ ∈ [0,1]
               │
               ▼
 Aggregated Dataset → Combined JSON Report
               │
               ▼
 HTML Dashboard Rendering → index.html + combined-report.html

Each analyzer operates as an independent pipeline, orchestrated by the CLI layer and converging through normalization and reporting into deterministic, human-readable dashboards.

Implementation Footprint

📁 manager-static-analysis-kit/
│
├── src/
│   ├── adapters/                  ← Static Layer (metric-specific engines)
│   │   ├── code-duplication/
│   │   ├── perf-budgets/
│   │   ├── tests-coverage/
│   │   ├── types-coverage/
│   │   ├── html-a11y-validation/
│   │   └── html-w3c-validation/
│   │
│   ├── collectors/                ← Dynamic Orchestration Layer
│   │   ├── cli-runner.mjs
│   │   ├── discovery-utils.mjs
│   │   └── aggregation-utils.mjs
│   │
│   ├── renderers/                 ← Normalization & Reporting
│   │   ├── json-aggregator.mjs
│   │   ├── html-dashboard.mjs
│   │   └── templates/
│   │
│   ├── configs/                   ← Threshold & policy definitions
│   │   ├── code-duplication-config.ts
│   │   ├── perf-budgets-config.ts
│   │   ├── tests-coverage-config.ts
│   │   ├── types-coverage-config.ts
│   │   └── thresholds.json
│   │
│   └── utils/
│       ├── fs-utils.js
│       ├── log-utils.js
│       └── normalize-utils.js
│
└── bin/
    ├── manager-code-duplication
    ├── manager-perf-budgets
    ├── manager-tests-coverage
    ├── manager-types-coverage
    └── manager-static-analysis-kit

This architecture turns the static analysis ecosystem into a self-governing quality substrate. Each adapter independently produces measurable, machine-readable artifacts, while the orchestration and normalization layers provide coherence, transparency, and deterministic governance across the entire OVHcloud Manager monorepo.

Incremental Rollout and Team Adoption

The rollout of the framework followed a progressive, adaptive strategy designed to ensure adoption without disruption. Instead of a single, monolithic activation, DASH was deployed incrementally across the OVHcloud Manager ecosystem, aligning with each team's maturity and technological stack.

Progressive Activation Model

DASH was introduced through a phased adoption sequence:

  • Pilot stage: Integration began with a subset of React applications (zimbra, container, dedicated) to validate correctness and determinism of the CLI adapters (manager-code-duplication, manager-tests-coverage, manager-types-coverage, etc.). This phase verified reproducibility of JSON outputs, visual accuracy of HTML dashboards, and stability of normalization logic.
  • Progressive expansion: After the pilot's success, analyzers were extended to additional domains: duplication, ESLint, SWC migration, TypeScript coverage, W3C validation, and accessibility (A11y).
  • Full coverage: The system now spans all major applications and libraries under manager/apps/ and packages/manager/*, providing ecosystem-wide visibility into quality, modernization, and migration status.

This strategy allowed teams to benefit from DASH's insights immediately without waiting for full ecosystem readiness. It also ensured that early feedback refined configuration defaults, threshold definitions, and dashboard readability before enterprise-wide rollout.

Governance Consistency and Configurability

Governance consistency is achieved through a central configuration registry located in:

src/configs/
├── thresholds.json
├── code-duplication-config.ts
├── tests-coverage-config.ts
├── types-coverage-config.ts
└── perf-budgets-config.ts

All thresholds are defined per metric, not per application, ensuring that comparisons remain meaningful across teams.
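
For illustration, a per-metric policy entry might take the following shape; this is a hypothetical sketch, not the contents of the actual configuration files:

// Hypothetical shape of a per-metric configuration (not the actual src/configs/ contents).
interface MetricPolicy {
  metric: string;
  thresholds: { green: number; orange: number }; // same semantics for every application
  weight: number;                                // organizational importance wₖ in Φ
}

export const codeDuplicationPolicy: MetricPolicy = {
  metric: 'code-duplication',
  thresholds: { green: 80, orange: 60 },
  weight: 1,
};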

Teams can propose configuration adjustments via controlled pull requests, which are validated automatically during CD analysis runs.

This hybrid governance model enables incremental and autonomous adoption within a unified semantic framework.

Observability Without Enforcement

DASH adopts a non-blocking quality model. Analyses are observational and informative; pipelines never fail on quality degradation.

Instead, deviations trigger:

  • Colored thresholds in dashboards (🟢 green, 🟠 orange, 🔴 red),
  • Webex notifications summarizing results and breaches,
  • Static HTML reports automatically published to the internal dashboard service.

This approach favors behavioral change through visibility, not coercion.

Teams retain control of their release cadence while progressively converging toward the defined quality baseline.

Weekly Adoption Analysis Reports

In addition to per-run analysis reports, we created a dedicated adoption monitoring subsystem.

A weekly Continuous Deployment (CD) task scans the entire ecosystem to evaluate tool adoption progress, such as completion of TypeScript migration, ESLint modernization, SWC replacement, A11y validation, or PNPM transition.

Each iteration:

  1. Executes all analyzers in "adoption mode", scanning every app's configuration and dependencies.
  2. Aggregates completion metrics (✅ done, ⚠️ partial, 📝 todo).
  3. Generates a centralized Migration Status Dashboard, published as an HTML artifact.
  4. Sends automated Webex notifications summarizing the overall migration state, allowing leadership and teams to monitor modernization velocity without manual intervention.

These dashboards are fully self-contained and refresh weekly, ensuring that adoption tracking remains synchronized with the latest repository state.

Organizational Impact

This incremental rollout strategy transformed DASH from a static analysis toolkit into a living ecosystem observability system:

  • Quality and modernization status are visible at all times across all projects.
  • Teams actively consult dashboards to identify migration gaps and prioritize actions.
  • Weekly adoption reports create an institutional rhythm of continuous improvement.
  • Governance moved from hidden compliance to shared transparency, making software quality observable, comparable, and actionable.

Quick Start

DASH can be executed directly using the Manager Static Analysis Kit, available on npm as @ovh-ux/manager-static-analysis-kit.

It provides immediate access to all analyzers and validation suites used in the unified quality pipeline.

Installation

Install once from the monorepo root or any individual app:

yarn add -D @ovh-ux/manager-static-analysis-kit

Then invoke the main entrypoint to launch all quality analyzers:

yarn manager-static-dynamic-quality-checks

Each analyzer runs independently but shares the same output structure and deterministic behavior.

Supported CLIs

The kit exposes four operational analyzers, each responsible for one dimension of software quality:

manager-code-duplication     → detects duplicated code across apps and libs
manager-perf-budgets         → evaluates bundle and asset sizes
manager-tests-coverage       → aggregates Vitest/Jest test coverage
manager-types-coverage       → reports TypeScript annotation completeness

All CLIs support the same targeting flags:

--app / --apps
--package / --packages
--library / --libraries

This uniform syntax allows identical invocation patterns across analyzers, ensuring full automation compatibility.

Example Commands

Typical execution scenarios:

# Analyze a single app
yarn manager-code-duplication --app zimbra

# Analyze multiple apps
yarn manager-code-duplication --apps container,zimbra

# Analyze packages by name
yarn manager-code-duplication --packages @ovh-ux/manager-zimbra-app,@ovh-ux/manager-pci-workflow-app

# Analyze shared libraries
yarn manager-code-duplication --libraries manager-ui-kit,shell-client

Mixed valid and invalid inputs (e.g., --apps zimbra,unknown-app,container) are tolerated: valid modules are analyzed, and invalid ones are logged and skipped without interrupting execution.

If all provided targets are invalid, the process exits with code 1. Otherwise, it always exits with 0 — a key guarantee for CI resilience.

CLI Validation Suite

Each CLI includes automated runtime tests that simulate real-world usage and verify:

  • Correct parsing of --app, --package, and --library arguments
  • Accurate module discovery and path resolution
  • Expected exit semantics and error handling
  • Behavior consistency across mixed valid/invalid targets

To execute validation tests locally:

yarn static-dynamic-quality-check-tests

These tests ensure every analyzer behaves deterministically across environments and input combinations.

Runtime Requirements

Each analysis type has minimal prerequisites to guarantee meaningful output:

Type Coverage       → requires tsconfig.json (TypeScript projects only)
Tests Coverage      → requires Vitest or Jest configuration and test files
Performance Budgets → requires vite.config.[ts|js] for build metrics
Code Duplication    → processes React-based sources (.js, .ts, .tsx)

Non-React or legacy (AngularJS) projects are automatically excluded, but valid React modules continue executing normally, ensuring partial success even in hybrid monorepos.

Output Structure

After execution, each analyzer writes normalized reports following a common structure:

<metric>-reports/
├── <target>/
│   ├── <metric>-report.json
│   └── index.html
├── <metric>-combined-report.json
└── <metric>-combined-report.html

Both the JSON and HTML artifacts are deterministic — identical runs always produce identical outputs. The HTML dashboards can be opened directly in any browser for inspection.

Limitations and Future Improvements

Current Limitations

Despite its structural robustness and deterministic design, DASH presents several operational and organizational limitations intrinsic to its incremental adoption model:

1️⃣ Incremental Adoption Pace: DASH was deliberately designed for progressive integration across teams. However, this autonomy leads to heterogeneous adoption velocity. Teams onboard the framework according to their release calendars and migration priorities, which slows the global convergence toward unified quality governance. Some modules may continue using outdated or partial setups for extended periods.

2️⃣ Non-Enforcing Governance Model: Currently, DASH operates in a non-gating mode. Quality evaluations are observational only: they generate reports, alerts, and dashboards, but do not block CI/CD pipelines. While this ensures developer autonomy and ecosystem stability, it also limits enforcement capability. Standards can still be bypassed locally, and corrective action depends on voluntary adoption rather than structural enforcement.

3️⃣ Static Temporal Scope: Although CD jobs generate daily quality reports and weekly adoption summaries, these analyses remain snapshot-based. DASH does not yet maintain historical series or time-evolving models of conformance and drift. Comparing current and past results requires manual correlation of archived artifacts.

4️⃣ Partial Ecosystem Coverage: Legacy AngularJS applications and non-React modules are outside the analytical perimeter. They are detected but skipped automatically. As a result, the governance view is currently limited to the React and TypeScript ecosystem.

5️⃣ Limited Architectural Metrics: DASH's current adapters focus on code-level metrics (linting, typing, duplication, coverage, performance). They do not yet assess architectural quality (modularity, coupling, dependency graph structure, maintainability index). This restricts the analytical depth to local quality attributes rather than systemic software health.

Planned Enhancements

DASH's roadmap aims to evolve the framework into a comprehensive software quality observability platform, with several extensions under active design:

1️⃣ Integration with Project Health Tool (phTool): DASH will be connected to a higher-level platform named phTool, designed for cross-project governance. This integration will provide:

  • Historical trend visualizations of quality metrics,
  • Quality drift analytics,
  • Automatic scoring and badge generation,
  • Real-time adoption dashboards.

phTool will transform DASH from a reporting framework into a continuous governance substrate capable of measuring both technical conformance and organizational adoption velocity.

2️⃣ Expansion of Analytical Dimensions: Future adapters under consideration include:

  • Code Health Meter: quantifying complexity, maintainability, and Halstead metrics.
  • Dependency Graph Analyzer (Madge Integration): mapping circular dependencies and architectural violations.
  • Modularity and Coupling Evaluators: computing package cohesion, cross-boundary imports, and architectural drift indicators.

In its current form, DASH provides a reproducible, organization-wide substrate for static and dynamic quality analysis, balancing precision with flexibility. Future iterations will extend its reach from reporting to prediction, from observation to governance, and from code-level metrics to architectural intelligence, progressively shaping a unified, evolvable foundation for software quality assurance at scale.

Conclusion

The Dynamic And Static Health (DASH) framework represents a practical and conceptual step toward restoring structure and determinism in large-scale front-end ecosystems.

By abstracting quality as a formal, multidimensional process, rather than as a set of fragmented tool outputs, DASH redefines how software organizations observe, measure, and govern code health.

Within the OVHcloud Manager ecosystem, it demonstrated that consistent semantics, normalized metrics, and deterministic reporting can emerge from diversity rather than uniformity.

Through the separation of semantics (policy) from mechanics (tools), DASH enabled heterogeneous analyzers (ESLint, TypeScript, Vitest, and others) to coexist under a single quality model without sacrificing flexibility or autonomy.

Its layered architecture transformed scattered reports into reproducible, interpretable artifacts, forming a shared foundation for long-term software governance.

Yet the broader contribution of DASH extends beyond automation: it establishes a philosophy of reversible modernization, a way to evolve systems incrementally, transparently, and without systemic fragility. It proves that quality, when modeled semantically, can unify teams, codebases, and processes without coercion.

Future work will expand this foundation toward architectural intelligence, connecting DASH to phTool for continuous governance, adoption analytics, and predictive insights.

In doing so, the framework will evolve from static analysis to living, adaptive quality learning, a bridge between engineering precision and organizational resilience.

Thank you for joining me on this journey; I hope it inspires new ways to modernize complex systems, through structure, clarity, and the quiet power of reversible design. 💙

Want to Connect? 
You can find me at GitHub: https://github.com/helabenkhalfallah