Tool 10 · Algorithm Deep Dive
Graph Neural Network + Process Mining
Impact is inherently a graph problem — if you change object A, which other objects feel it? A "where-used" list shows direct dependencies but misses indirect ripple effects. A developer changes one line in a BAPI; where-used shows 12 dependencies. Actual impact: 47 downstream programs break at month-end.
Typed dependency graph built from SolMan/Signavio data. Node2Vec embeddings capture structural similarity — finding objects that "look like" the changed one in the dependency web. Weighted random walk propagates impact through the graph, decaying with hop distance and edge-type weight.
Typed graph: requirements ↔ objects ↔ modules ↔ compliance packs. ~50k nodes on a mid-size program. Edge types: CALLS, READS_FROM, INHERITS, IMPLEMENTS, USES_DATA_FROM.
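A minimal sketch of the typed graph as a plain adjacency map. The node names and edge list here are illustrative only, not the production schema; the real graph is imported nightly from SolMan / Signavio.

```python
# Typed dependency graph: node -> list of (target, edge_type, weight).
# Weights mirror the edge-type table in Step 2 of the pipeline.
EDGE_WEIGHTS = {
    "READS_FROM": 1.0,   # strongest dependency
    "CALLS": 0.8,
    "IMPLEMENTS": 0.7,
    "USES_DATA_FROM": 0.6,
    "INHERITS": 0.5,
}

def build_graph(edges):
    """edges: iterable of (source, edge_type, target) triples."""
    graph = {}
    for src, etype, dst in edges:
        graph.setdefault(src, []).append((dst, etype, EDGE_WEIGHTS[etype]))
        graph.setdefault(dst, [])  # make sure sink nodes exist too
    return graph

# Illustrative edges matching the BAPI_TAX_CALC example below.
graph = build_graph([
    ("BAPI_TAX_CALC", "READS_FROM", "TAX_TABLE"),
    ("BAPI_TAX_CALC", "CALLS", "VALIDATION_FUNC"),
    ("TAX_TABLE", "CALLS", "TAX_REPORT"),
    ("TAX_REPORT", "CALLS", "FI_POSTING_PROGRAM"),
])
```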
128-dim embeddings trained on full graph. Captures structural role: "What else looks like this object in the dependency web?"
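The walk-corpus stage can be sketched as below. This is a simplified, uniform-transition version: full Node2Vec additionally biases transitions with return/in-out parameters p and q, and the resulting "sentences" are fed to a skip-gram model (e.g. gensim's Word2Vec, 128 dimensions) to produce the embeddings compared via cosine similarity.

```python
import math
import random

def generate_walks(graph, walk_length=10, walks_per_node=5, seed=42):
    """Generate random-walk 'sentences' of node names for skip-gram training.
    graph: node -> list of neighbor names (uniform transitions in this sketch)."""
    rng = random.Random(seed)
    walks = []
    for node in graph:
        for _ in range(walks_per_node):
            walk = [node]
            while len(walk) < walk_length and graph[walk[-1]]:
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Tiny illustrative graph; production walks run over the full ~50k-node graph.
sample_graph = {"A": ["B"], "B": ["A", "C"], "C": []}
walks = generate_walks(sample_graph)
```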
10,000 weighted random walks from the changed node. Transition probabilities are weighted by edge-type importance.
Aggregated visit counts are normalized to a 0-100 impact score per downstream object.
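The propagation step can be sketched as follows. Assumptions are flagged in comments: restart is modeled by ending the current excursion (statistically close to teleporting back to the source across 10,000 walks), and the sample graph mirrors the BAPI_TAX_CALC example rather than real SolMan data.

```python
import random
from collections import Counter

def impact_scores(graph, source, n_walks=10_000, restart=0.15,
                  decay=0.85, max_hops=20, seed=7):
    """Weighted random walks from the changed node. Transition probability
    is proportional to edge weight; each visit is discounted by decay**hops.
    Restart (prob 0.15) is modeled here by ending the excursion -- a sketch,
    not the production implementation."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walks):
        node, hops = source, 0
        while hops < max_hops:
            edges = graph[node]
            if rng.random() < restart or not edges:
                break
            # Weighted choice: probability proportional to edge weight.
            r = rng.random() * sum(w for _, _, w in edges)
            for dst, _etype, w in edges:
                r -= w
                if r <= 0:
                    node = dst
                    break
            else:
                node = edges[-1][0]  # float-rounding fallback
            hops += 1
            visits[node] += decay ** hops
    top = max(visits.values())
    return {n: round(100 * v / top) for n, v in visits.items()}  # 0-100 scale

# Illustrative graph: node -> [(target, edge_type, weight), ...]
graph = {
    "BAPI_TAX_CALC": [("TAX_TABLE", "READS_FROM", 1.0),
                      ("VALIDATION_FUNC", "CALLS", 0.8)],
    "TAX_TABLE": [("TAX_REPORT", "CALLS", 0.8)],
    "TAX_REPORT": [("FI_POSTING_PROGRAM", "CALLS", 0.8)],
    "VALIDATION_FUNC": [],
    "FI_POSTING_PROGRAM": [],
}
scores = impact_scores(graph, "BAPI_TAX_CALC")
```

The direct READS_FROM dependency (TAX_TABLE) ends up with the top score, with multi-hop CALLS chains decaying behind it, matching the ripple list in Step 5.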
Changed Node: B → Ripple effects to A, C, D, E
┌─────────────────────────────────────────────────────────────────────────────────────────┐
│ CHANGE IMPACT ANALYZER PIPELINE │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ │
│ │ INPUT: │ "Change tax calculation logic in BAPI_TAX_CALC" │
│ │ Change Text │ │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ STEP 1: NER TARGET EXTRACTION │ │
│ │ │ │
│ │ Fine-tuned NER extracts target object from change text: │ │
│ │ "BAPI_TAX_CALC" → Object Type: BAPI, Module: FI │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ STEP 2: BUILD DEPENDENCY GRAPH (Nightly) │ │
│ │ │ │
│ │ Import from SolMan / Signavio: │ │
│ │ │ │
│ │ Nodes: Programs, Function Modules, Classes, Tables, CDS Views, │ │
│ │ BAPIs, Requirements, Modules, Compliance Packs │ │
│ │ │ │
│ │ Edges (with type weights): │ │
│ │ ┌──────────────────┬──────────┐ │ │
│ │ │ Edge Type │ Weight │ │ │
│ │ ├──────────────────┼──────────┤ │ │
│ │ │ READS_FROM │ 1.0 │ (strongest dependency) │ │
│ │ │ CALLS │ 0.8 │ │ │
│ │ │ IMPLEMENTS │ 0.7 │ │ │
│ │ │ USES_DATA_FROM │ 0.6 │ │ │
│ │ │ INHERITS │ 0.5 │ │ │
│ │ └──────────────────┴──────────┘ │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ STEP 3: Node2Vec EMBEDDING (Nightly) │ │
│ │ │ │
│ │ Train Node2Vec on full graph: │ │
│ │ • Random walks generate "sentences" of nodes │ │
│ │ • Skip-gram model learns 128-dim embeddings │ │
│ │ • Similar structural roles → similar embeddings │ │
│ │ │ │
│ │ Cosine similarity finds "structurally similar" objects │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ STEP 4: WEIGHTED RANDOM WALK PROPAGATION │ │
│ │ │ │
│ │ Start at changed node (BAPI_TAX_CALC) │ │
│ │ Run 10,000 random walks with: │ │
│ │ • Transition probability ∝ edge_weight │ │
│ │ • Restart probability = 0.15 (teleport back to source) │ │
│ │ • Decay factor = 0.85 per hop │ │
│ │ │ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │ │
│ │ │ Walk 1: BAPI_TAX_CALC → READS_FROM(1.0) → TAX_TABLE │ │ │
│ │ │ → CALLS(0.8) → TAX_REPORT → CALLS(0.8) → FI_POST │ │ │
│ │ │ │ │ │
│ │ │ Walk 2: BAPI_TAX_CALC → CALLS(0.8) → VALIDATION_FUNC │ │ │
│ │ │ → (restart) → BAPI_TAX_CALC → ... │ │ │
│ │ └─────────────────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ After 10,000 walks, count visits per node → Impact Score │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ STEP 5: RIPPLE LIST & OUTPUT │ │
│ │ │ │
│ │ Normalize visit counts to 0-100 Ripple Score: │ │
│ │ │ │
│ │ ┌─────────────────────────────┬───────────────┬────────────────┐ │ │
│ │ │ Object │ Ripple Score │ Impact Path │ │ │
│ │ ├─────────────────────────────┼───────────────┼────────────────┤ │ │
│ │ │ TAX_TABLE │ 98 │ Direct READS │ │ │
│ │ │ TAX_REPORT │ 76 │ CALLS → CALLS │ │ │
│ │ │ FI_POSTING_PROGRAM │ 64 │ CALLS → CALLS │ │ │
│ │ │ VALIDATION_FUNC │ 52 │ Direct CALLS │ │ │
│ │ │ MM_TAX_INTERFACE │ 31 │ CALLS → READS │ │ │
│ │ └─────────────────────────────┴───────────────┴────────────────┘ │ │
│ │ │ │
│ │ Output: Ripple list + Recommended testing scope │ │
│ └──────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────────────────┘
Tool 10 prevents surprises during upgrades and custom development.
The current random walk treats all edge types with fixed, hand-set weights. A relational GCN (R-GCN) would instead learn an edge-type-specific weight matrix per relation, letting the model learn how strongly each dependency type actually propagates impact.
Expected improvement: 81.4% recall → 88-91% recall at 90% precision.
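For intuition, a single R-GCN layer (Schlichtkrull et al.) can be sketched in plain Python: each relation type r gets its own learned matrix W_r, replacing the fixed scalar edge weights. The tiny 2-dimensional features and hand-set matrices below are illustrative stand-ins for learned parameters.

```python
def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

def rgcn_layer(h, neighbors, W_rel, W_self):
    """One R-GCN layer: h_i' = ReLU(W_0 h_i + sum_r sum_{j in N_r(i)} W_r h_j / c_ir).
    h: node -> feature vector; neighbors: node -> {relation: [neighbor ids]};
    W_rel: relation -> weight matrix (learned in training; hand-set here)."""
    out = {}
    for i, hi in h.items():
        agg = matvec(W_self, hi)  # self-loop transform W_0 h_i
        for rel, js in neighbors[i].items():
            c = len(js)  # normalization constant c_{i,r}
            for j in js:
                agg = vadd(agg, [v / c for v in matvec(W_rel[rel], h[j])])
        out[i] = [max(0.0, v) for v in agg]  # ReLU
    return out

# Toy example: A CALLS B; identity self-transform, 0.5-scaled CALLS transform.
h = {"A": [1.0, 0.0], "B": [0.0, 1.0]}
neighbors = {"A": {"CALLS": ["B"]}, "B": {}}
W_rel = {"CALLS": [[0.5, 0.0], [0.0, 0.5]]}
W_self = [[1.0, 0.0], [0.0, 1.0]]
out = rgcn_layer(h, neighbors, W_rel, W_self)
```

In production one would use a library implementation (e.g. a relational graph-conv layer in DGL or PyTorch Geometric) rather than hand-rolled matrices.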
| Metric | Value | Context |
|---|---|---|
| Recall @ 90% Precision | 81.4% | 300-change backtest |
| Mean Average Precision (MAP) | 0.76 | Ranked impact list quality |
| False Positive Rate | 9.2% | Objects flagged but not actually impacted |
| Graph Build Time | ~45s | 50k node graph |
| Query Latency | 120ms | Impact computation |
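The headline "Recall @ 90% Precision" metric can be computed per change by sweeping cutoffs over the ranked ripple list against the ground-truth impacted set, then averaging across the backtest. The function and example data below are an illustrative sketch, not the actual backtest harness.

```python
def recall_at_precision(ranked, truth, min_precision=0.90):
    """Best recall achievable at any list cutoff whose precision >= min_precision.
    ranked: objects ordered by descending Ripple Score; truth: truly impacted set."""
    best_recall, hits = 0.0, 0
    for k, obj in enumerate(ranked, start=1):
        hits += obj in truth
        precision, recall = hits / k, hits / len(truth)
        if precision >= min_precision:
            best_recall = max(best_recall, recall)
    return best_recall

# Illustrative: the Step 5 ripple list, with a hypothetical ground-truth set.
ranked = ["TAX_TABLE", "TAX_REPORT", "FI_POSTING_PROGRAM",
          "VALIDATION_FUNC", "MM_TAX_INTERFACE"]
truth = {"TAX_TABLE", "TAX_REPORT", "FI_POSTING_PROGRAM", "VALIDATION_FUNC"}
```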
Result: Month-end closing succeeds, avoiding a 2-week delay and $80k in rework.