Standalone cluster workstation

Physics and Pipeline Review

Build passed · Demo path validated · Diagnostic inference only

ShearFrac cluster efficiency analysis

A pressure-budget review tool for imported stage telemetry.

This package explains how the app moves from telemetry intake to pressure decomposition, inferred active openings, cluster-efficiency bands, and review-queue decisions. It is designed to help a reviewer inspect the logic quickly without treating the output as direct downhole truth.

Primary workflow: 4 steps. Intake, queue, detail, settings.
Hot path: 1 Hz. Canonical review cadence for stage telemetry.
Model type: Budget. Pressure allocation before opening inference.
Output status: Inferred. Not exact perforation or downhole truth.

Operator path

From imported telemetry to stage review

The application is organized as a workstation, not a general dashboard. The user imports telemetry, runs the pressure-budget engine, reviews the queue, and opens a focused stage detail when the queue flags uncertainty or uneven acceptance.

Validated demo path
flowchart LR
  A[ShearStream telemetry] --> C[Intake normalization]
  B[FracBrain stage metadata] --> C
  D[CSV or manual fallback] --> C
  C --> E[Stage set and configured well]
  E --> F[Worker-backed pressure-budget engine]
  F --> G[Review Queue]
  G --> H[Stage Detail pressure budget]
  H --> I[Model Settings only when assumptions need tuning]
  1. Intake

    Preserves pressure, slurry rate, proppant concentration, friction reducer telemetry, stage metadata, casing geometry, and completion inputs.

  2. Normalize

    Filters invalid rows, applies supported unit conversions, and keeps missing required fields as missing instead of quietly zero-filling them.

  3. Compute

    Delegates the pressure-budget solve to the canonical engine through the worker path so UI and worker math do not fork.

  4. Review

    Ranks stages by attention state, uncertainty width, uniformity, and residual behavior before the reviewer opens detail.
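The intake and normalize steps above can be sketched as follows. This is an illustrative outline, not the app's actual code: the `TelemetryRow` shape, the `PSI_PER_BAR` constant, and the function names are assumptions chosen to show the fail-closed unit handling and missing-as-missing policy.

```typescript
// Hypothetical normalize-step sketch; names and types are illustrative.
type TelemetryRow = {
  t: number;               // timestamp, seconds
  pressure: number | null; // treating pressure, PSI after conversion
  rate: number | null;     // slurry rate
};

const PSI_PER_BAR = 14.5038; // one example of a supported conversion

// Convert a pressure sample to PSI; fail closed on unsupported units.
function toPsi(value: number | null, unit: string): number | null {
  if (value === null || !Number.isFinite(value)) return null; // keep missing as missing
  switch (unit) {
    case "psi": return value;
    case "bar": return value * PSI_PER_BAR;
    default: throw new Error(`unsupported pressure unit: ${unit}`);
  }
}

// Drop rows with no usable timestamp; never zero-fill missing channels.
function normalize(rows: TelemetryRow[]): TelemetryRow[] {
  return rows.filter((r) => Number.isFinite(r.t));
}
```

The key design point is that an unknown unit throws instead of guessing, and a missing channel stays `null` all the way into the engine.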

Review posture: the queue should lead with evidence and uncertainty, not just a single efficiency value. That keeps the product honest when telemetry quality, unit hints, or pressure-budget assumptions are imperfect.
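One way to read the ranking rule is as a comparator over per-stage evidence. The sketch below is a hypothetical scoring function under assumed field names and weights; the app's actual ranking may weigh these signals differently.

```typescript
// Hypothetical queue-ranking sketch; fields and weights are assumptions.
type StageSummary = {
  stage: number;
  attention: number;   // 0 = ok, higher = needs review
  bandWidth: number;   // upper minus lower confidence band
  uniformity: number;  // 1 = perfectly even acceptance
  residualFlag: boolean; // unexplained residual behavior
};

// Higher score sorts earlier in the review queue.
function reviewScore(s: StageSummary): number {
  return (
    2 * s.attention +        // attention state leads
    s.bandWidth +            // wide uncertainty pulls a stage forward
    (1 - s.uniformity) +     // uneven acceptance pulls a stage forward
    (s.residualFlag ? 1 : 0) // residual misbehavior adds weight
  );
}

function rankQueue(stages: StageSummary[]): StageSummary[] {
  return [...stages].sort((a, b) => reviewScore(b) - reviewScore(a));
}
```

Because the score is built from evidence terms rather than the efficiency number alone, a stage with a clean efficiency value but a wide band still surfaces for review.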

Budget equation

The solver decomposes measured pressure before inferring openings

The app separates hydrostatic contribution, pipe friction, near-wellbore/tortuosity allocation, and clean perforation residual before solving for active perforation behavior.

Pressure-budget model
flowchart TD
  P[Measured treating pressure] --> R[Residual budget]
  ISIP[Dynamic ISIP anchor] --> R
  H[Proppant hydrostatic term] --> R
  PIPE[Segmented pipe friction] --> R
  NWB[NWB and tortuosity allocation] --> R
  R --> PERF[Clean perforation residual]
  PERF --> SOLVE[Invert orifice relation]
  SOLVE --> ACTIVE[Soft active perforations]
  ACTIVE --> CLUSTER[Inferred active clusters]
  CLUSTER --> CE[Cluster efficiency and uniformity]
  CE --> BAND[Lower and upper confidence band]

Friction pressure

frictionPressure = pressurePSI - effectiveIsip - propHydrostatic

Clean perf residual

perfFriction = frictionPressure - pipeFriction - nwbTortuosity

Forward pressure check

calcP = pipeFriction + nwbTortuosity + calcPerfFrict(...) + effectiveIsip + propHydrostatic
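The three budget relations above are a closed loop: subtract the anchors to get the residual, then reassemble every term and confirm the measured pressure comes back. A minimal sketch, with made-up numeric inputs and an assumed function name (the document's `calcPerfFrict(...)` is replaced here by the already-derived `perfFriction` so the identity is exact):

```typescript
// Round-trip sketch of the pressure budget, using the document's terms.
// All numeric values fed to this function are illustrative, not field data.
function residualBudget(
  pressurePSI: number,
  effectiveIsip: number,
  propHydrostatic: number,
  pipeFriction: number,
  nwbTortuosity: number,
) {
  // Friction pressure: measured pressure minus the static anchors.
  const frictionPressure = pressurePSI - effectiveIsip - propHydrostatic;
  // Clean perf residual: what remains after pipe and NWB allocations.
  const perfFriction = frictionPressure - pipeFriction - nwbTortuosity;
  // Forward check: reassembling every term must reproduce measured pressure.
  const calcP =
    pipeFriction + nwbTortuosity + perfFriction + effectiveIsip + propHydrostatic;
  return { frictionPressure, perfFriction, calcP };
}
```

The forward check is what keeps the allocation honest: any term that is double-counted or dropped shows up immediately as a gap between `calcP` and the measured pressure.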

Perforation friction

perfFriction = 0.2367 * rate^2 * fluidDensity / (perfDiam^4 * numPerfs^2 * dischargeCoef^2)

Opening inference

perfsCalculated = sqrt( 0.2367 * rate^2 * fluidDensity / (perfDiam^4 * perfFriction * dischargeCoef^2) )
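The orifice relation and its inversion are algebraic mirrors of each other, so a round trip should recover the perforation count exactly. The sketch below uses the document's constant and exponents; the unit conventions noted in the comments (rate in bpm, density in ppg, diameter in inches) are the usual oilfield ones and are an assumption about this app.

```typescript
// Perforation friction from the orifice relation (constant and exponents
// taken from the document; units assumed: bpm, ppg, inches, psi).
function perfFrictionPsi(
  rate: number, fluidDensity: number, perfDiam: number,
  numPerfs: number, dischargeCoef: number,
): number {
  return (0.2367 * rate ** 2 * fluidDensity) /
    (perfDiam ** 4 * numPerfs ** 2 * dischargeCoef ** 2);
}

// Invert the relation for the soft active-perforation count.
function perfsCalculated(
  rate: number, fluidDensity: number, perfDiam: number,
  perfFriction: number, dischargeCoef: number,
): number {
  return Math.sqrt(
    (0.2367 * rate ** 2 * fluidDensity) /
      (perfDiam ** 4 * perfFriction * dischargeCoef ** 2),
  );
}
```

Because the inversion divides by `perfFriction`, a near-zero clean perf residual blows the inferred count up, which is one reason the outputs carry a confidence band rather than a point claim.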

Claim control

Outputs are review-grade inference, not exact live truth

The pressure budget is materially stronger than a static friction guess, but the interface should continue to label the outputs as inferred diagnostics until calibration and promotion gates are satisfied.

No overclaiming
flowchart LR
  A[Telemetry and metadata] --> B{Input quality gates}
  B -->|valid enough| C[Diagnostic pressure budget]
  B -->|missing or inconsistent| D[Review flag]
  C --> E[Inferred efficiency]
  C --> F[Uniformity]
  C --> G[Uncertainty band]
  E --> H{Promotion boundary}
  F --> H
  G --> H
  H -->|current app| I[Reviewer decision support]
  H -. blocked .-> J[Exact downhole truth claims]
What is strong today
The engine now uses explicit pressure allocation, segmented pipe friction, friction-reducer scaling, dynamic ISIP anchoring, NWB/tortuosity allocation, erosion-aware diameter behavior, and visible uncertainty bands.
What still needs care
Slurry transport is still simplified, NWB/tortuosity remains a proxy-heavy model, rheology is reduced-order, and hydrostatic sign convention deserves domain review before any blind change.
What should never be implied
The app should not claim exact open clusters, exact live perforation truth, exact erosion truth, or autonomous control readiness from this diagnostic output alone.
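The input-quality gate in the claim-control flow can be made concrete as a small discriminated union. This is a hypothetical sketch: the field names, the specific checks, and the idea that any single failure routes to a review flag are assumptions illustrating the fail-closed posture, not the app's actual gate.

```typescript
// Hypothetical input-quality gate matching the claim-control flow.
type StageInputs = {
  hasPressure: boolean;   // required telemetry channel present
  hasRate: boolean;       // required telemetry channel present
  unitsResolved: boolean; // every channel has an unambiguous unit hint
};

type GateResult =
  | { kind: "diagnostic" }                    // proceed to the pressure budget
  | { kind: "review-flag"; reason: string };  // surface to the reviewer instead

function inputQualityGate(s: StageInputs): GateResult {
  if (!s.hasPressure) return { kind: "review-flag", reason: "missing pressure" };
  if (!s.hasRate) return { kind: "review-flag", reason: "missing rate" };
  if (!s.unitsResolved) return { kind: "review-flag", reason: "ambiguous units" };
  return { kind: "diagnostic" };
}
```

The type system enforces the promotion boundary in miniature: there is no variant for "exact downhole truth", so nothing downstream can claim it.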

Best next work

Highest-leverage improvements are in reliability and calibration

The core pressure-budget path is now credible enough for review. The next gains should reduce false certainty by strengthening the data path and adding stronger physics validation where the model remains approximate.

Prioritized
  1. Telemetry alignment and resampling

    Put channels on a unified time grid so slightly different source timestamps do not corrupt the budget.

  2. Unit normalization and import safety

    Continue making unit hints explicit and fail closed on ambiguous or unsupported incoming units.

  3. Missing-data behavior

    Keep missing required telemetry visible and reviewable instead of hiding uncertainty through zero-fill or carry-forward shortcuts.

  4. Slurry transport and calibration

    Move toward distributed wellbore slurry transport and better NWB/tortuosity calibration with controlled reference evidence.
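Item 1 above, the unified time grid, can be sketched as linear interpolation of each channel onto shared timestamps. This is an illustrative outline under assumptions: the boundary policy (return `null` outside a channel's coverage rather than extrapolating) and the `Sample` shape are choices made here, not the app's behavior.

```typescript
// Hypothetical resampling sketch: put one channel on a unified time grid.
type Sample = { t: number; v: number };

function resample(channel: Sample[], grid: number[]): (number | null)[] {
  const sorted = [...channel].sort((a, b) => a.t - b.t);
  return grid.map((t) => {
    // Outside the channel's coverage, keep the value missing (no extrapolation).
    if (sorted.length === 0 || t < sorted[0].t || t > sorted[sorted.length - 1].t) {
      return null;
    }
    // Find the bracketing pair of samples for this grid timestamp.
    let i = 0;
    while (i + 1 < sorted.length && sorted[i + 1].t < t) i++;
    const a = sorted[i];
    const b = sorted[Math.min(i + 1, sorted.length - 1)];
    if (b.t === a.t) return a.v;
    // Linear interpolation between the bracketing samples.
    const w = (t - a.t) / (b.t - a.t);
    return a.v + w * (b.v - a.v);
  });
}
```

Resampling every channel to the same 1 Hz grid before the budget solve removes the silent skew that slightly offset source timestamps would otherwise inject into the subtraction chain.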

Recommended loop: one problem at a time, one bounded implementation slice, build validation, browser validation, then a targeted research pass only when a physics or calibration question remains unresolved.