initial commit
146
BRAINSTORM.md
Normal file
@@ -0,0 +1,146 @@
# Impakt — Brainstorm

Ideas, feature proposals, and design thoughts collected during development.
Grouped by theme. Checkboxes indicate implementation status.

---
## Data Import & Channel Intelligence

- [ ] **Lazy channel loading** — For tests with 500+ channels, don't load `.dat` files until a channel is first accessed. Load headers eagerly, data lazily. This keeps `Session.open()` fast even for large datasets.
- [ ] **Channel aliasing** — Allow users to define human-friendly aliases for channel codes, e.g. `"head_ax" -> "11HEAD0000ACXA"`. Aliases can live in templates or user config.
- [ ] **Auto-detect signal type from channel code** — Use the measurement code (AC, FO, MO, DC...) to automatically suggest an appropriate CFC class, plot axis labels, and unit handling.
- [ ] **Batch loader** — Load an entire directory of tests at once. `Session.open_batch("/tests/series_2024/")` returns a `Series` object with cross-test operations.
- [ ] **Channel search by physical meaning** — "Show me all head accelerations" should work across different test objects (driver, passenger, child dummies).
- [ ] **Delta channel** — Compute the difference between the same channel across two tests. Useful for design iteration comparisons.
- [ ] **NHTSA open data integration** — NHTSA publishes crash test data, but in its own proprietary format (UDS), not ISO 13499 MME. No publicly available MME datasets exist (the format is industry-internal; Euro NCAP data is confidential). Build a UDS-to-MME converter or a direct UDS reader plugin. NHTSA's NCAP database is at `nhtsa.gov/file-downloads?p=nhtsa/downloads/NCAP/` — the Access DB (`NCAP 6-14-10.mdb`) contains summary results, not time-history data. Contact NHTSA (SCI@dot.gov, 888-327-4236) for time-history channel data access.
- [ ] **Channel metadata editor** — Sometimes `.chn` files have wrong units or missing metadata. Provide a non-destructive override mechanism stored in the session.
- [ ] **Synthetic data generator** — Built into the test suite (`tests/fixtures/generate_mme.py`) for development. Consider exposing this as `impakt.synth` for users who want to prototype templates or test analysis scripts without real test data.
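The lazy-loading idea above can be sketched with a deferred-load wrapper around the raw sample file. This is a minimal sketch: `LazyChannel`, its fields, and the flat-`float32` file layout are all assumptions for illustration, not the current Impakt API.

```python
from __future__ import annotations

from dataclasses import dataclass, field

import numpy as np


@dataclass
class LazyChannel:
    """Header metadata is loaded eagerly; sample data only on first access.

    Hypothetical sketch: assumes the .dat file is a flat float32 array.
    """

    name: str
    dat_path: str
    _data: np.ndarray | None = field(default=None, repr=False)

    @property
    def data(self) -> np.ndarray:
        # Deferred load: the .dat file is only read the first time
        # someone asks for samples, then cached on the instance.
        if self._data is None:
            self._data = np.fromfile(self.dat_path, dtype=np.float32)
        return self._data
```

`Session.open()` would then only parse headers and hand out `LazyChannel` objects, paying the I/O cost per channel on demand.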
## Visualization

- [ ] **Synchronized zoom/pan** — When plotting multiple subplots (e.g., head accel + chest deflection), zoom/pan actions should sync across all subplots.
- [ ] **Waterfall / 3D surface plots** — For tests with many similar channels (e.g., barrier face loads at multiple locations), a 3D surface or waterfall view shows spatial distribution.
- [ ] **Animation mode** — Play back the crash event in time, with a vertical cursor sweeping across all plots simultaneously. Sync with video if available.
- [ ] **Channel comparison sparklines** — In the channel tree sidebar, show tiny inline sparklines next to each channel name so engineers can visually identify signals before selecting them.
- [ ] **Peak annotation** — Auto-annotate the peak value and time on plots. Toggle-able. Shows a marker + text label at the peak.
- [ ] **Statistical overlays** — When viewing multiple tests, show mean +/- 1 sigma envelope, min/max envelope. Useful for repeatability studies.
- [ ] **Color-by-test vs color-by-channel** — When overlaying multiple tests, let the user choose: each test gets a color (channels within a test share the color), or each channel gets a color (tests use line dash variants).
- [ ] **Dark mode** — Engineers often work in labs with varying lighting. A dark theme would be appreciated.
- [ ] **Custom plot layouts** — Allow 2x2, 3x1, 1x3, etc. subplot grids within a single view, each with independent channel selection.
- [ ] **Persistent cursor positions** — When moving between plot views or applying transforms, cursor positions should persist in the session state.
## Signal Processing

- [ ] **Frequency spectrum viewer** — FFT / power spectral density of a channel. Helps diagnose noise, identify resonant frequencies, and verify CFC filter behavior.
- [ ] **Integration / differentiation** — Integrate acceleration to get velocity/displacement. Differentiate displacement to get velocity. Track cumulative units.
- [ ] **Cross-correlation** — Find time lag between two channels. Useful for understanding signal propagation through the vehicle structure.
- [ ] **Envelope detection** — Compute the signal envelope (Hilbert transform). Useful for identifying amplitude trends.
- [ ] **Window functions** — Apply Hanning, Hamming, etc. for spectral analysis pre-processing.
- [ ] **Savitzky-Golay filter** — Alternative to CFC for smoothing that better preserves peaks.
- [ ] **Event detection** — Automatically detect impact events from acceleration signals (threshold crossing, change-point detection). Useful for multi-event tests.
- [ ] **Signal quality metrics** — Detect clipping, saturation, dropout, or excessive noise. Flag channels with potential data quality issues.
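The spectrum-viewer item boils down to a single-sided FFT with amplitude normalization. A minimal NumPy-only sketch (function name is illustrative; a real implementation would also apply a window and support PSD scaling):

```python
import numpy as np


def amplitude_spectrum(t: np.ndarray, y: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Single-sided amplitude spectrum of a uniformly sampled signal.

    Returns (frequencies in Hz, amplitudes in the signal's own units).
    """
    dt = float(t[1] - t[0])                      # sample interval, assumed uniform
    freqs = np.fft.rfftfreq(len(y), dt)          # non-negative frequency bins
    amps = np.abs(np.fft.rfft(y)) * 2.0 / len(y)  # scale so a unit sine peaks at ~1
    return freqs, amps
```

For example, a 50 Hz unit sine sampled at 1 kHz produces a peak of amplitude ~1.0 at the 50 Hz bin, which is also a quick sanity check for CFC filter attenuation.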
## Injury Criteria & Protocols

- [ ] **BrIC (Brain Injury Criterion)** — Rotational brain injury metric using angular velocity. Increasingly used in newer protocols.
- [ ] **DAMAGE (Diffuse Axonal Multi-Axis General Evaluation)** — Related to BrIC, uses angular acceleration.
- [ ] **SIMon / GHBMC coupling** — Interface with finite element head models for advanced brain injury assessment.
- [ ] **Thorax Trauma Index (TTI)** — Side impact chest criterion.
- [ ] **Abdominal Peak Force (APF)** — Abdomen criterion for side impact.
- [ ] **Acetabular force** — Pelvis criterion for side impact (SID-IIs, WorldSID).
- [ ] **Pedestrian criteria** — Head Injury Criterion (HIC) for headform impactors, lower leg bending, upper leg force.
- [ ] **Protocol version management UI** — Visual diff between protocol versions. Show what thresholds changed between e.g. Euro NCAP 2023 vs 2025.
- [ ] **Custom protocol builder** — Let users define their own pass/fail criteria with custom thresholds. Useful for internal OEM targets that are stricter than regulation.
- [ ] **Sensitivity analysis** — "What if" tool: how would the score change if HIC was 50 points lower? Slider-based interactive exploration.
- [ ] **Regulatory compliance check** — Given a test, automatically check all applicable FMVSS / ECE regulation limits and flag any exceedances.
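The custom protocol builder could reduce to a small declarative schema plus an evaluator. A hypothetical sketch — `Limit`, its fields, and the `evaluate` signature are illustrative, not Impakt's protocol API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Limit:
    """One pass/fail limit in a user-defined protocol (hypothetical schema)."""

    criterion: str          # e.g. "HIC15"
    threshold: float
    direction: str = "max"  # "max": value must stay at or below the threshold


def evaluate(limits: list[Limit], results: dict[str, float]) -> dict[str, bool]:
    """Return criterion -> pass/fail for every limit that has a computed result."""
    verdicts: dict[str, bool] = {}
    for lim in limits:
        if lim.criterion not in results:
            continue  # missing criterion: surfaced as a warning elsewhere
        value = results[lim.criterion]
        verdicts[lim.criterion] = (
            value <= lim.threshold if lim.direction == "max" else value >= lim.threshold
        )
    return verdicts
```

Stricter internal OEM targets then become just another list of `Limit` entries loaded from user config.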
## Templates & Workflow

- [ ] **Template marketplace / sharing** — Allow teams to share templates. Git-based version control for template libraries. Team templates synced via a shared directory or Git repo.
- [ ] **Template inheritance** — A template can extend another template. e.g., "My NCAP template" extends "Euro NCAP 2024" but adds custom corridors and extra plots.
- [ ] **Channel auto-mapping** — When applying a template to a new test, auto-map channel patterns to actual available channels. Handle naming variations across test facilities.
- [ ] **Template validation** — When a template references channel patterns, validate that the current test data has matching channels. Show warnings for missing channels.
- [ ] **Quick comparison mode** — Two tests side-by-side with synchronized cursors. One-click "compare" from the template panel.
- [ ] **Corridor management UI** — Visual editor for creating and editing tolerance corridors. Draw the envelope on a plot, export to CSV.
- [ ] **Session history / undo** — Track a history of actions (transforms applied, cursors moved, channels added) with undo/redo.
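Template inheritance is essentially a recursive dictionary merge of the extending template over its base. A sketch under the assumption that templates deserialize to plain nested dicts (the merge policy shown — dicts merge, everything else overrides — is one reasonable choice, not a settled design):

```python
def merge_templates(base: dict, override: dict) -> dict:
    """Recursively merge an extending template over its base.

    Nested dicts merge key-by-key; any other value in ``override``
    (including lists of plots or corridors) replaces the base value.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_templates(merged[key], value)
        else:
            merged[key] = value
    return merged
```

So "My NCAP template" would carry only its deltas (extra corridors, extra plots) and inherit everything else from "Euro NCAP 2024".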
## Reports

- [ ] **Multi-page reports** — Combine multiple plots + injury summary + protocol rating into a single PDF with automatic table of contents.
- [ ] **Configurable report branding** — Company logo, header/footer text, color scheme. Stored in user config.
- [ ] **Excel export** — Export criteria results and cursor values to Excel, not just PDF. Engineers love spreadsheets.
- [ ] **PowerPoint export** — Generate slides with one plot per slide. Common request in OEM environments.
- [ ] **Automated report narration** — Generate natural-language summary paragraphs: "The head acceleration exceeded the Euro NCAP green threshold at t=0.032s, resulting in a yellow rating for the head region."
- [ ] **Report templates gallery** — Pre-built templates for common submission formats (NHTSA compliance report, Euro NCAP submission, IIHS test report).
- [ ] **Comparison reports** — Automatically generate a report comparing two or more tests, highlighting differences in criteria and ratings.
## Integration & Automation

- [ ] **Jupyter notebook integration** — Impakt objects should display rich output in Jupyter (interactive Plotly plots, HTML tables). Consider `_repr_html_` on key objects.
- [ ] **Watch mode** — Monitor a directory for new test data. When a new test appears (e.g., DAQ export completes), automatically apply a template and generate a report.
- [ ] **CI/CD integration** — `impakt evaluate --protocol euro_ncap --exit-code` returns non-zero if any criterion fails. Useful for automated test validation pipelines.
- [ ] **REST API mode** — Run Impakt as a server with a REST API for integration with other tools (CAE workflows, PLM systems).
- [ ] **Pre/post-processing hooks** — User-defined Python functions that run before/after template application, criteria computation, or report generation. Part of the plugin system.
- [ ] **CAE data import** — Read simulation results (LS-DYNA binout/d3plot, Abaqus ODB) so that test vs. simulation overlay is trivial.
- [ ] **Video sync** — Link high-speed camera footage with channel data. Scrubbing the video moves the time cursor on plots, and vice versa.
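The CI/CD idea hinges on one small rule: map the criterion verdicts to a shell-friendly exit status. A sketch of how the proposed `--exit-code` flag could resolve (function names are illustrative):

```python
import sys


def exit_code_for(verdicts: dict[str, bool]) -> int:
    """0 when every evaluated criterion passes, 1 otherwise."""
    return 0 if all(verdicts.values()) else 1


def main(verdicts: dict[str, bool]) -> None:
    # In the real CLI, verdicts would come from protocol evaluation;
    # sys.exit(1) is what makes a CI pipeline step fail.
    sys.exit(exit_code_for(verdicts))
```

A pipeline step would then simply run `impakt evaluate --protocol euro_ncap --exit-code` and rely on the non-zero status to fail the build.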
## Performance & Scale

- [ ] **Memory-mapped data** — For very large datasets, use `numpy.memmap` to avoid loading everything into RAM.
- [ ] **Channel cache** — Cache frequently-accessed transformed channels to avoid recomputing CFC filters every time.
- [ ] **Parallel criteria computation** — Compute all injury criteria in parallel using `concurrent.futures`. The individual computations are independent.
- [ ] **Web UI performance** — For 500-channel tests, the channel tree and dropdown become unwieldy. Implement virtualized scrolling and tree expansion.
- [ ] **Progressive rendering** — Show plots immediately with low-resolution data, then refine with full-resolution data once loaded.
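Because the criteria are independent, the parallel-computation item is a direct fit for `concurrent.futures`. A sketch using threads (reasonable when the heavy lifting happens in NumPy/SciPy, which release the GIL; a `ProcessPoolExecutor` would be the swap-in for pure-Python criteria):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable


def compute_all(
    criteria: dict[str, Callable[[], float]], max_workers: int = 4
) -> dict[str, float]:
    """Run independent criterion computations concurrently.

    ``criteria`` maps a criterion name to a zero-argument callable that
    returns its value; results come back keyed by the same names.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(fn) for name, fn in criteria.items()}
        return {name: fut.result() for name, fut in futures.items()}
```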
## Data Quality & Validation

- [ ] **Channel polarity check** — Verify SAE sign convention compliance. Detect if a channel appears to have inverted polarity (e.g., positive compression forces that should be negative).
- [ ] **Sensor sanity checks** — Flag physically impossible values (e.g., head acceleration > 500g, negative femur tension during frontal impact).
- [ ] **Inter-channel consistency** — Check that related channels are consistent (e.g., resultant acceleration is actually sqrt of sum of squares of components).
- [ ] **Time sync verification** — Check that all channels have consistent timing (same sample rate, same trigger point, no time drift).
- [ ] **Missing channel detection** — For a given protocol, check which required channels are missing from the test data and warn the user.
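The resultant-vs-components check above can be sketched as a worst-case relative deviation; a mismatch above some small tolerance flags the stored resultant channel for review (the function name and tolerance policy are assumptions):

```python
import numpy as np


def resultant_mismatch(
    x: np.ndarray, y: np.ndarray, z: np.ndarray, r: np.ndarray
) -> float:
    """Worst-case relative deviation of a stored resultant channel from
    sqrt(x^2 + y^2 + z^2) of its components."""
    expected = np.sqrt(x**2 + y**2 + z**2)
    scale = max(float(np.max(np.abs(expected))), 1e-12)  # avoid divide-by-zero
    return float(np.max(np.abs(r - expected)) / scale)
```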
## User Experience

- [ ] **Keyboard shortcuts** — `Ctrl+1` through `Ctrl+9` for quick channel groups. `Space` to toggle play/pause on animation. `R` to reset zoom.
- [ ] **Right-click context menus** — Right-click on a channel in the tree to apply common transforms, compute criteria, or export data.
- [ ] **Drag-and-drop** — Drag channels from the tree onto the plot area. Drag a test directory onto the app to open it.
- [ ] **Recent files** — Remember recently opened tests for quick access.
- [ ] **Bookmarks** — Save specific views (channel selection + zoom range + cursors) as named bookmarks within a session.
- [ ] **Multi-window** — Open multiple test sessions in separate browser tabs, synchronized or independent.
- [ ] **Guided analysis wizards** — Step-by-step guided workflows: "Run frontal NCAP analysis" walks the user through channel selection, filtering, criteria computation, and report generation.
- [ ] **Tooltip glossary** — Hover over terms like "HIC15", "CFC 180", "Nij" to see a brief explanation. Aids learning for junior engineers.
## Plugin Ideas

- [ ] **impakt-dicom** — Import DICOM medical imaging data for correlation with dummy injury metrics.
- [ ] **impakt-catia** — Link to CATIA vehicle models for 3D visualization of sensor locations.
- [ ] **impakt-abaqus** — Import Abaqus simulation results.
- [ ] **impakt-lsdyna** — Import LS-DYNA simulation results (binout, d3plot).
- [ ] **impakt-madymo** — Import MADYMO occupant simulation results.
- [ ] **impakt-jncap** — Japanese NCAP scoring protocol.
- [ ] **impakt-cncap** — Chinese NCAP scoring protocol.
- [ ] **impakt-kncap** — Korean NCAP scoring protocol.
- [ ] **impakt-ancap** — Australian NCAP scoring protocol.
- [ ] **impakt-latinncap** — Latin NCAP scoring protocol.
---

## Architecture Notes

### Transform Pipeline Composition

Currently transforms are a flat chain. Consider a DAG-based pipeline for more complex workflows where a channel needs to exist in both filtered and unfiltered form, with the resultant computed from filtered components but cursor values shown on unfiltered data.
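The DAG idea can be sketched as named nodes with memoized evaluation, so a shared intermediate (say, the filtered channel) feeds several downstream transforms without being recomputed. This is a design sketch only — `Node`, `evaluate_dag`, and the lack of cycle detection are all simplifications:

```python
from dataclasses import dataclass, field
from typing import Callable

import numpy as np


@dataclass
class Node:
    """One transform in the DAG: named inputs, one output array."""

    name: str
    fn: Callable[..., np.ndarray]
    inputs: list[str] = field(default_factory=list)


def evaluate_dag(
    nodes: dict[str, Node], sources: dict[str, np.ndarray]
) -> dict[str, np.ndarray]:
    """Resolve every node, memoizing shared intermediates in ``cache``.

    Sketch only: assumes the graph is acyclic and all inputs resolvable.
    """
    cache: dict[str, np.ndarray] = dict(sources)

    def resolve(name: str) -> np.ndarray:
        if name not in cache:
            node = nodes[name]
            cache[name] = node.fn(*(resolve(dep) for dep in node.inputs))
        return cache[name]

    for name in nodes:
        resolve(name)
    return cache
```

Cursor readout would then simply index into `cache["raw"]` while plots draw `cache["filtered"]`.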
### Real-time Collaboration

Multiple engineers often need to look at the same test data simultaneously during post-test review sessions. Consider WebSocket-based real-time sync of cursor positions, channel selections, and annotations, plus a "presenter mode" where one engineer drives and the others follow.
### Offline Mode

The web UI requires a running Python server. For situations where engineers need to share results with non-technical stakeholders, consider an "export to static HTML" mode that bundles all data and Plotly.js into a single self-contained HTML file that can be opened in any browser.
### Standardization of Corridor Files

Corridors currently use a simple CSV format. Consider adopting or defining a more structured format that includes metadata (source protocol, year, applicable dummy type, confidence level). Corridors from official protocol documents could be bundled with the tool.
78
pyproject.toml
Normal file
@@ -0,0 +1,78 @@
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "impakt"
version = "0.1.0"
description = "Crash test data analysis, visualization, and reporting"
readme = "README.md"
license = "MIT"
requires-python = ">=3.11"
authors = [
    { name = "Ben" },
]
keywords = ["crash-test", "automotive", "mme", "iso13499", "ncap", "iihs", "safety"]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Intended Audience :: Science/Research",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
    "Topic :: Scientific/Engineering",
]

dependencies = [
    "numpy>=1.24",
    "scipy>=1.10",
    "plotly>=5.18",
    "dash>=2.14",
    "dash-bootstrap-components>=1.5",
    "pandas>=2.0",
    "pyyaml>=6.0",
    "jinja2>=3.1",
    "weasyprint>=60.0",
    "pydantic>=2.0",
]

[project.optional-dependencies]
tdms = [
    "nptdms>=1.7",
]

[dependency-groups]
dev = [
    "pytest>=7.0",
    "pytest-cov>=4.0",
    "ruff>=0.1",
    "mypy>=1.5",
]

[project.entry-points."impakt.readers"]
mme = "impakt.io.mme:MMEReader"

[project.scripts]
impakt = "impakt.script.cli:main"

[tool.hatch.build.targets.wheel]
packages = ["src/impakt"]

[tool.pytest.ini_options]
testpaths = ["tests"]
pythonpath = ["src"]
filterwarnings = [
    "ignore::pytest.PytestCollectionWarning",
]

[tool.ruff]
target-version = "py311"
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "N", "W", "UP"]

[tool.mypy]
python_version = "3.11"
strict = true
7
src/impakt/__init__.py
Normal file
@@ -0,0 +1,7 @@
"""Impakt — Crash test data analysis, visualization, and reporting."""

__version__ = "0.1.0"

from impakt.script.api import Session, Template

__all__ = ["Session", "Template", "__version__"]
BIN
src/impakt/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/__pycache__/__init__.cpython-314.pyc
Normal file
Binary file not shown.
28
src/impakt/channel/__init__.py
Normal file
@@ -0,0 +1,28 @@
"""Channel data model and ISO naming intelligence."""

from impakt.channel.code import ChannelCode, parse_channel_code
from impakt.channel.group import auto_group, build_channel_tree, find_resultant_candidates
from impakt.channel.model import (
    Channel,
    ChannelGroup,
    DummyInfo,
    ImpactConfig,
    TestData,
    TestMetadata,
    VehicleInfo,
)

__all__ = [
    "Channel",
    "ChannelCode",
    "ChannelGroup",
    "DummyInfo",
    "ImpactConfig",
    "TestData",
    "TestMetadata",
    "VehicleInfo",
    "auto_group",
    "build_channel_tree",
    "find_resultant_candidates",
    "parse_channel_code",
]
BIN
src/impakt/channel/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/__init__.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/code.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/code.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/group.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/group.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/lookup.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/lookup.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/model.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/channel/__pycache__/model.cpython-314.pyc
Normal file
Binary file not shown.
292
src/impakt/channel/code.py
Normal file
@@ -0,0 +1,292 @@
"""ISO channel naming convention parser.

Decomposes channel codes per SAE J211-1 / ISO 13499 into structured fields,
enabling automatic grouping, human-readable labeling, and criteria matching.

Two code formats are supported:

**16-character ISO 13499 (with dummy type)**::

    11HEAD0000H3ACXP
    | |   |   | | |`- sense (P/N)
    | |   |   | | `-- direction (X/Y/Z/R/0)
    | |   |   | `---- measurement (AC/FO/MO/DS/DC/VE/AN...)
    | |   |   `------ dummy type (H3/P3/PC/S2/WS...)
    | |   `---------- fine location (0000/UP00/LE00...)
    | `-------------- main location (HEAD/NECK/CHST...)
    `---------------- test object (11/12/13/10/B0/D0...)

**14-character simplified (no dummy type)**::

    11HEAD0000ACXA
    | |   |   | |`- sense (A/P/N)
    | |   |   | `-- direction (X/Y/Z/R)
    | |   |   `---- measurement (AC/FO/MO...)
    | |   `-------- fine location
    | `------------ main location
    `-------------- test object
"""

from __future__ import annotations

import re
from dataclasses import dataclass, field
from typing import Self

from impakt.channel.lookup import (
    DIRECTIONS,
    FINE_LOCATIONS,
    MAIN_LOCATIONS,
    MEASUREMENTS,
    SENSE_CODES,
    TEST_OBJECTS,
)
# Known dummy/sensor type codes at positions 11-12 of 16-char codes
DUMMY_TYPE_CODES: set[str] = {
    "H3",  # Hybrid III
    "H2",  # Hybrid II
    "P3",  # P-series (3-year-old child)
    "P6",  # P-series (6-year-old child)
    "PC",  # Pedestrian child headform
    "PA",  # Pedestrian adult headform
    "S2",  # SID-IIs
    "ES",  # ES-2re
    "WS",  # WorldSID
    "TH",  # THOR
    "Q0",  # Q-series child dummy
    "Q1",  # Q1/Q1.5
    "Q3",  # Q3
    "Q6",  # Q6
    "BF",  # BioFidel
    "00",  # Generic/structural (often barrier or vehicle channels)
}
def _has_embedded_dummy(code: str) -> bool:
    """Detect if positions 11-12 contain a dummy type code.

    If positions 11-12 are a known dummy code AND positions 13-14 are a
    known measurement code, the code uses the 16-char format.
    """
    if len(code) < 16:
        return False
    candidate_dummy = code[10:12]
    candidate_meas = code[12:14]
    return candidate_dummy in DUMMY_TYPE_CODES and candidate_meas in MEASUREMENTS
@dataclass(frozen=True, slots=True)
class ChannelCode:
    """Parsed ISO channel naming code.

    Handles both 16-char ISO 13499 codes (with embedded dummy type) and
    14-char simplified codes.

    Attributes:
        raw: Original unparsed string.
        test_object: Positions 1-2 (e.g., '11' = driver).
        main_location: Positions 3-6 (e.g., 'HEAD').
        fine_location: Positions 7-10 (e.g., '0000').
        dummy_type: Positions 11-12 in 16-char codes (e.g., 'H3'). Empty for 14-char.
        measurement: The measurement type (e.g., 'AC', 'FO', 'MO').
        direction: Axis character (X/Y/Z/R/0).
        sense: Sign convention character (P/N/A).
    """

    raw: str
    test_object: str = ""
    main_location: str = ""
    fine_location: str = ""
    dummy_type: str = ""
    measurement: str = ""
    direction: str = ""
    sense: str = ""
    filter_class: str = ""
    _valid: bool = field(default=False, repr=False)

    @classmethod
    def parse(cls, raw: str) -> Self:
        """Parse a raw channel code string.

        Auto-detects whether the code is 16-char (with dummy type) or
        14-char (without). Tolerant of variations.
        """
        code = raw.strip().upper()

        if len(code) < 14:
            return cls(raw=raw, _valid=False)

        test_object = code[0:2]
        main_location = code[2:6]
        fine_location = code[6:10]

        if _has_embedded_dummy(code):
            # 16-character format: pos 11-12 = dummy, 13-14 = meas, 15 = dir, 16 = sense
            dummy_type = code[10:12]
            measurement = code[12:14]
            direction = code[14:15] if len(code) > 14 else ""
            sense = code[15:16] if len(code) > 15 else ""
            filter_class = ""
        else:
            # 14-character format: pos 11-12 = meas, 13 = dir, 14 = sense
            dummy_type = ""
            measurement = code[10:12]
            direction = code[12:13]
            sense = code[13:14] if len(code) > 13 else ""
            filter_class = code[14:16] if len(code) > 14 else ""

        return cls(
            raw=raw,
            test_object=test_object,
            main_location=main_location,
            fine_location=fine_location,
            dummy_type=dummy_type,
            measurement=measurement,
            direction=direction,
            sense=sense,
            filter_class=filter_class,
            _valid=True,
        )

    @property
    def is_valid(self) -> bool:
        return self._valid

    def group_key(self) -> str:
        """Key for grouping X/Y/Z component channels.

        Two channels belong to the same group if they share everything
        except the direction field. This enables automatic resultant
        computation.
        """
        if not self._valid:
            return self.raw
        return (
            f"{self.test_object}{self.main_location}"
            f"{self.fine_location}{self.dummy_type}{self.measurement}"
            f"_{self.sense}"
        )

    def is_component(self) -> bool:
        """Whether this is an X, Y, or Z component (not a resultant)."""
        return self.direction in ("X", "Y", "Z")

    def is_resultant(self) -> bool:
        """Whether this is a pre-computed resultant channel."""
        return self.direction == "R"

    def axis(self) -> str | None:
        """Return the axis letter, or None if resultant or unknown."""
        if self.direction in ("X", "Y", "Z"):
            return self.direction
        return None

    # ----- Human-readable descriptions -----

    @property
    def test_object_label(self) -> str:
        return TEST_OBJECTS.get(self.test_object, f"Object {self.test_object}")

    @property
    def location_label(self) -> str:
        main = MAIN_LOCATIONS.get(self.main_location, self.main_location)
        fine = FINE_LOCATIONS.get(self.fine_location, "")
        if fine and fine != "Center of Gravity / Primary":
            return f"{main} ({fine})"
        return main

    @property
    def dummy_type_label(self) -> str:
        labels = {
            "H3": "Hybrid III",
            "H2": "Hybrid II",
            "P3": "P3 Child",
            "P6": "P6 Child",
            "PC": "Ped. Child",
            "PA": "Ped. Adult",
            "S2": "SID-IIs",
            "ES": "ES-2re",
            "WS": "WorldSID",
            "TH": "THOR",
            "Q1": "Q1.5",
            "Q3": "Q3",
            "Q6": "Q6",
        }
        return labels.get(self.dummy_type, self.dummy_type)

    @property
    def measurement_label(self) -> str:
        info = MEASUREMENTS.get(self.measurement)
        if info:
            return info[0]
        return self.measurement

    @property
    def measurement_unit(self) -> str:
        info = MEASUREMENTS.get(self.measurement)
        if info:
            return info[1]
        return ""

    @property
    def direction_label(self) -> str:
        return DIRECTIONS.get(self.direction, self.direction)

    @property
    def sense_label(self) -> str:
        return SENSE_CODES.get(self.sense, self.sense)

    @property
    def description(self) -> str:
        """Full human-readable description.

        Example: ``Driver Head (Hybrid III) — Acceleration X (Longitudinal)``
        """
        parts = [self.test_object_label, self.location_label]
        if self.dummy_type:
            parts.append(f"({self.dummy_type_label})")
        parts.extend(["—", self.measurement_label, self.direction_label])
        return " ".join(parts)

    @property
    def short_label(self) -> str:
        """Short label suitable for plot legends.

        Example: ``Head Accel X``
        """
        loc = MAIN_LOCATIONS.get(self.main_location, self.main_location)
        meas = self.measurement_label
        abbrev = {
            "Acceleration": "Accel",
            "Displacement": "Disp",
            "Deflection": "Defl",
            "Velocity": "Vel",
            "Angular Velocity": "Ang Vel",
            "Angular Acceleration": "Ang Accel",
        }
        meas = abbrev.get(meas, meas)
        return f"{loc} {meas} {self.direction}"

    def matches(self, pattern: str) -> bool:
        """Check if this channel code matches a glob-like pattern.

        Supports ``*`` (any characters) and ``{X,Y,Z}`` set notation.
        Matches against the raw code string.
        """
        regex = pattern.replace(".", r"\.")
        regex = re.sub(
            r"\{([^}]+)\}",
            lambda m: "(" + "|".join(re.escape(x) for x in m.group(1).split(",")) + ")",
            regex,
        )
        regex = regex.replace("*", ".*")
        regex = f"^{regex}$"
        return bool(re.match(regex, self.raw, re.IGNORECASE))


def parse_channel_code(raw: str) -> ChannelCode:
    """Convenience function to parse a channel code string."""
    return ChannelCode.parse(raw)
84
src/impakt/channel/group.py
Normal file
@@ -0,0 +1,84 @@
|
||||
"""Channel grouping utilities.
|
||||
|
||||
Provides functions for auto-detecting channel groups and building
|
||||
hierarchical channel trees from ISO naming conventions.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from collections import defaultdict
|
||||
|
||||
from impakt.channel.model import Channel, ChannelGroup
|
||||
|
||||
|
||||
def auto_group(channels: dict[str, Channel]) -> dict[str, ChannelGroup]:
|
||||
"""Group channels into X/Y/Z component families.
|
||||
|
||||
Channels are grouped by their ``group_key()`` — they share test object,
|
||||
location, measurement, and sense, differing only in direction (X/Y/Z).
|
||||
|
||||
Args:
|
||||
channels: Dictionary of channel name -> Channel.
|
||||
|
||||
Returns:
|
||||
Dictionary of group key -> ChannelGroup.
|
||||
"""
|
||||
temp: dict[str, dict[str, Channel]] = defaultdict(dict)
|
||||
|
||||
for ch in channels.values():
|
||||
if not ch.code.is_valid or not ch.code.is_component():
|
||||
continue
|
||||
gkey = ch.code.group_key()
|
||||
axis = ch.code.direction
|
||||
temp[gkey][axis] = ch
|
||||
|
||||
groups: dict[str, ChannelGroup] = {}
|
||||
for gkey, axes in temp.items():
|
||||
groups[gkey] = ChannelGroup(
|
||||
key=gkey,
|
||||
x=axes.get("X"),
|
||||
y=axes.get("Y"),
|
||||
z=axes.get("Z"),
|
||||
)
|
||||
|
||||
return groups
|
||||
|
||||
|
||||
def find_resultant_candidates(channels: dict[str, Channel]) -> list[ChannelGroup]:
|
||||
"""Find all channel groups that have at least 2 components.
|
||||
|
||||
These are candidates for automatic resultant computation.
|
||||
"""
|
||||
groups = auto_group(channels)
|
||||
return [g for g in groups.values() if len(g.components()) >= 2]


def build_channel_tree(
    channels: dict[str, Channel],
) -> dict[str, dict[str, dict[str, list[Channel]]]]:
    """Build a hierarchical tree of channels for UI display.

    Returns:
        Nested dict: {test_object_label: {location_label: {measurement_label: [channels]}}}
    """
    tree: dict[str, dict[str, dict[str, list[Channel]]]] = {}

    for ch in channels.values():
        if not ch.code.is_valid:
            obj = "Other"
            loc = "Unknown"
            meas = "Unknown"
        else:
            obj = ch.code.test_object_label
            loc = ch.code.location_label
            meas = ch.code.measurement_label

        tree.setdefault(obj, {}).setdefault(loc, {}).setdefault(meas, []).append(ch)

    # Sort channels within each leaf
    for obj_dict in tree.values():
        for loc_dict in obj_dict.values():
            for meas_key in loc_dict:
                loc_dict[meas_key] = sorted(loc_dict[meas_key], key=lambda c: c.name)

    return tree
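The three-level `setdefault` chain is the whole trick here. A minimal standalone version (the labels are illustrative):

```python
tree: dict = {}
rows = [
    ("Driver", "Head", "Acceleration", "11HEAD0000ACXA"),
    ("Driver", "Head", "Acceleration", "11HEAD0000ACYA"),
    ("Driver", "Chest", "Deflection", "11CHST0000DCXA"),
]
for obj, loc, meas, name in rows:
    # Each setdefault creates the nested container on first sight of the key
    tree.setdefault(obj, {}).setdefault(loc, {}).setdefault(meas, []).append(name)

print(tree["Driver"]["Head"]["Acceleration"])  # ['11HEAD0000ACXA', '11HEAD0000ACYA']
```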
326
src/impakt/channel/lookup.py
Normal file
@@ -0,0 +1,326 @@
"""ISO channel naming lookup tables for crash test instrumentation.

Based on SAE J211-1 and ISO 13499 conventions. These tables map the positional
codes in a 16-character channel name to human-readable descriptions.
"""

from __future__ import annotations

# ---------------------------------------------------------------------------
# Test Object codes (positions 1-2)
# First digit: row/category, Second digit: seat position or object type
# ---------------------------------------------------------------------------
TEST_OBJECTS: dict[str, str] = {
    # Sled tests
    "01": "Sled Occupant",
    # Vehicle structure
    "10": "Vehicle Structure",
    # First row occupants
    "11": "Driver (1st Row Left)",
    "12": "Front Passenger (1st Row Right)",
    # Second row
    "13": "2nd Row Left",
    "14": "2nd Row Center",
    "15": "2nd Row Right",
    # Third row
    "16": "3rd Row Left",
    "17": "3rd Row Center",
    "18": "3rd Row Right",
    # Barrier / impactor
    "20": "Barrier / Impactor",
    "21": "Moving Deformable Barrier",
    # Pedestrian
    "30": "Pedestrian / Headform Impactor",
    "31": "Pedestrian Upper Legform",
    "32": "Pedestrian Lower Legform",
    # Barrier object codes (ISO 13499)
    "B0": "Barrier",
    "B1": "Barrier Row 1",
    "B2": "Barrier Row 2",
    # Impactor codes
    "D0": "Impactor",
    "D1": "Impactor 1",
    "D2": "Impactor 2",
}

# ---------------------------------------------------------------------------
# Main Location codes (positions 3-6)
# ---------------------------------------------------------------------------
MAIN_LOCATIONS: dict[str, str] = {
    # Head / Neck
    "HEAD": "Head",
    "SKULL": "Skull",
    "FACE": "Face",
    "NECK": "Neck",
    "NCKL": "Neck Lower",
    "NCKU": "Neck Upper",
    # Thorax
    "CHST": "Chest",
    "THSP": "Thoracic Spine",
    "CLAV": "Clavicle",
    "RIBS": "Ribs",
    "STRN": "Sternum",
    "SHLD": "Shoulder",
    "SHLL": "Shoulder Left",
    "SHLR": "Shoulder Right",
    # Spine
    "SPIN": "Spine",
    "LUSP": "Lumbar Spine",
    # Abdomen / Pelvis
    "ABDO": "Abdomen",
    "PELV": "Pelvis",
    "ILAC": "Iliac Crest",
    "SACR": "Sacrum",
    "PUBC": "Pubic Symphysis",
    # Upper extremities
    "UPRA": "Upper Arm",
    "ELBO": "Elbow",
    "FORA": "Forearm",
    "WRST": "Wrist",
    "HAND": "Hand",
    # Lower extremities
    "FEMR": "Femur",
    "KNEE": "Knee",
    "TIBI": "Tibia",
    "ANKL": "Ankle",
    "FOOT": "Foot",
    # Vehicle structure locations
    "STCL": "Steering Column",
    "STRW": "Steering Wheel",
    "DASH": "Dashboard / IP",
    "BPIL": "B-Pillar",
    "APIL": "A-Pillar",
    "CPIL": "C-Pillar",
    "DOOR": "Door",
    "DRIM": "Door Trim",
    "SEAT": "Seat",
    "STTK": "Seat Track",
    "BELT": "Seat Belt",
    "FLOR": "Floor",
    "ROOF": "Roof",
    "FIRE": "Firewall",
    "TOEP": "Toepan / Footwell",
    "INST": "Instrument Panel",
    "PEDB": "Pedal Box",
    "BRKP": "Brake Pedal",
    "ACCP": "Accelerator Pedal",
    "CLTP": "Clutch Pedal",
    # Vehicle general
    "VEHC": "Vehicle (General)",
    "ENGN": "Engine",
    "BATT": "Battery",
    "FUEL": "Fuel System",
    # Barrier
    "BARR": "Barrier",
    "BFAC": "Barrier Face",
    # Foot / toes
    "TOES": "Toes",
    # Belt positions
    "BLOP": "Belt (Lap/Shoulder)",
    # Wheel
    "WHEL": "Wheel",
    # Structural — additional
    "FBAR": "Barrier Face",
    "KNSL": "Knee Slider",
    # Simulation / other
    "SIMN": "Simulation Node",
    "OTHR": "Other",
    "0000": "General / Unknown",
}

# ---------------------------------------------------------------------------
# Fine Location codes (positions 7-10)
# ---------------------------------------------------------------------------
FINE_LOCATIONS: dict[str, str] = {
    "0000": "Center of Gravity / Primary",
    "00UP": "Upper",
    "UP00": "Upper",
    "00LO": "Lower",
    "LO00": "Lower",
    "00LE": "Left",
    "LE00": "Left",
    "00RI": "Right",
    "RI00": "Right",
    "00FR": "Front",
    "FR00": "Front",
    "00RE": "Rear",
    "RE00": "Rear",
    "00IN": "Inner",
    "IN00": "Inner",
    "00OU": "Outer",
    "OU00": "Outer",
    # Combined positions (also used for tibia channels in real MME data)
    "LEUP": "Left Upper",
    "RIUP": "Right Upper",
    "LELO": "Left Lower",
    "RILO": "Right Lower",
    "LETP": "Left Proximal",
    "RITP": "Right Proximal",
    "LETD": "Left Distal",
    "RITD": "Right Distal",
    # Rib positions (THOR / ES-2re)
    "RB01": "Rib 1",
    "RB02": "Rib 2",
    "RB03": "Rib 3",
    "RB04": "Rib 4",
    "RB05": "Rib 5",
    "RB06": "Rib 6",
    # Spine vertebrae
    "T001": "T1 Vertebra",
    "T004": "T4 Vertebra",
    "T008": "T8 Vertebra",
    "T012": "T12 Vertebra",
    "L001": "L1 Vertebra",
    "L002": "L2 Vertebra",
    "L003": "L3 Vertebra",
    "L005": "L5 Vertebra",
    # THOR IR-TRACC positions
    "ULXX": "Upper Left",
    "URXX": "Upper Right",
    "LLXX": "Lower Left",
    "LRXX": "Lower Right",
    # Fine location with number suffix
    "0001": "Secondary",
    "0100": "Lower Position",
    "0101": "Lower Secondary",
}

# ---------------------------------------------------------------------------
# Physical Measurement codes (positions 11-12)
# ---------------------------------------------------------------------------
MEASUREMENTS: dict[str, tuple[str, str]] = {
    # (description, typical SI unit)
    "AC": ("Acceleration", "m/s²"),
    "FO": ("Force", "N"),
    "MO": ("Moment", "N·m"),
    "DS": ("Displacement", "mm"),
    "DC": ("Deflection", "mm"),
    "VE": ("Velocity", "m/s"),
    "AN": ("Angle", "deg"),
    "AV": ("Angular Velocity", "deg/s"),
    "AA": ("Angular Acceleration", "rad/s²"),
    "PR": ("Pressure", "kPa"),
    "TE": ("Temperature", "°C"),
    "ST": ("Strain", "µε"),
    "EN": ("Energy", "J"),
    "PW": ("Power", "W"),
    "VO": ("Voltage", "V"),
    "CU": ("Current", "A"),
    "TI": ("Time", "s"),
}

# ---------------------------------------------------------------------------
# Direction / Axis codes (position 13)
# ---------------------------------------------------------------------------
DIRECTIONS: dict[str, str] = {
    "X": "X (Longitudinal)",
    "Y": "Y (Lateral)",
    "Z": "Z (Vertical)",
    "R": "Resultant",
}

# SAE J211 axis sign conventions (positive direction)
SAE_SIGN_CONVENTIONS: dict[str, str] = {
    "X": "Forward (positive toward vehicle front)",
    "Y": "Lateral (positive to the occupant's right)",
    "Z": "Vertical (positive down)",
}

# ---------------------------------------------------------------------------
# Sense codes (position 14)
# ---------------------------------------------------------------------------
SENSE_CODES: dict[str, str] = {
    "A": "SAE Sign Convention A",
    "P": "Positive (SAE)",
    "N": "Negative",
    "V": "Vehicle Reference",
    "0": "Unsigned / Scalar",
}

# ---------------------------------------------------------------------------
# CFC Filter classes — mapping class number to -3 dB cutoff frequency
# Per SAE J211-1 the cutoff is roughly 5/3 × CFC; CFC 1000 is tabulated
# as 1650 Hz rather than the exact ratio.
# ---------------------------------------------------------------------------
CFC_CLASSES: dict[int, float] = {
    60: 100.0,
    180: 300.0,
    600: 1000.0,
    1000: 1650.0,
}

CFC_MINIMUM_SAMPLE_RATES: dict[int, float] = {
    60: 800.0,
    180: 2000.0,
    600: 6667.0,
    1000: 10000.0,
}
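A small helper sketch over these two tables; the function names are hypothetical and not part of the module:

```python
CFC_CLASSES = {60: 100.0, 180: 300.0, 600: 1000.0, 1000: 1650.0}
CFC_MINIMUM_SAMPLE_RATES = {60: 800.0, 180: 2000.0, 600: 6667.0, 1000: 10000.0}

def cutoff_hz(cfc: int) -> float:
    """-3 dB cutoff for a known CFC class (table lookup, no extrapolation)."""
    return CFC_CLASSES[cfc]

def sample_rate_ok(cfc: int, rate_hz: float) -> bool:
    """Check a recording's sample rate against the class minimum."""
    return rate_hz >= CFC_MINIMUM_SAMPLE_RATES[cfc]

print(cutoff_hz(1000))            # 1650.0
print(sample_rate_ok(600, 5000))  # False: CFC 600 needs >= 6667 Hz
```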

# ---------------------------------------------------------------------------
# Common dummy types and their identifiers
# ---------------------------------------------------------------------------
DUMMY_TYPES: dict[str, str] = {
    "H3-50M": "Hybrid III 50th Percentile Male",
    "H3-05F": "Hybrid III 5th Percentile Female",
    "H3-95M": "Hybrid III 95th Percentile Male",
    "THOR-50M": "THOR 50th Percentile Male",
    "SID-IIs": "SID-IIs (Side Impact)",
    "ES-2re": "ES-2re (Side Impact)",
    "WSID-50": "WorldSID 50th Percentile",
    "Q1.5": "Q1.5 Child Dummy (18 months)",
    "Q3": "Q3 Child Dummy (3 years)",
    "Q6": "Q6 Child Dummy (6 years)",
    "Q10": "Q10 Child Dummy (10 years)",
    "CRABI": "CRABI 12-Month Infant",
}

# ---------------------------------------------------------------------------
# Unit normalization — map common non-SI labels to standard forms
# ---------------------------------------------------------------------------
UNIT_ALIASES: dict[str, str] = {
    "g": "g",
    "G": "g",
    "m/s2": "m/s²",
    "m/s^2": "m/s²",
    "N": "N",
    "kN": "kN",
    "Nm": "N·m",
    "N.m": "N·m",
    "N*m": "N·m",
    "mm": "mm",
    "m": "m",
    "cm": "cm",
    "m/s": "m/s",
    "km/h": "km/h",
    "deg": "deg",
    "rad": "rad",
    "deg/s": "deg/s",
    "rad/s": "rad/s",
    "rad/s2": "rad/s²",
    "rad/s^2": "rad/s²",
    "kPa": "kPa",
    "bar": "bar",
    "Pa": "Pa",
    "C": "°C",
    "degC": "°C",
    "°C": "°C",
    "V": "V",
    "A": "A",
    "J": "J",
    "W": "W",
    "microstrain": "µε",
    "µε": "µε",
    "ue": "µε",
}


def normalize_unit(raw_unit: str) -> str:
    """Normalize a unit string to a standard form."""
    stripped = raw_unit.strip()
    return UNIT_ALIASES.get(stripped, stripped)
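Putting the positional tables to work: a hedged parsing sketch that slices a code by the positions documented in the comments above. The 14-character example code comes from the project notes; the real parser lives in `impakt.channel.code` and may differ:

```python
def split_code(code: str) -> dict[str, str]:
    # 1-based positions from the table comments, as 0-based Python slices
    return {
        "test_object": code[0:2],     # positions 1-2
        "main_location": code[2:6],   # positions 3-6
        "fine_location": code[6:10],  # positions 7-10
        "measurement": code[10:12],   # positions 11-12
        "direction": code[12:13],     # position 13
        "sense": code[13:14],         # position 14
    }

fields = split_code("11HEAD0000ACXA")
print(fields["main_location"], fields["measurement"], fields["direction"])  # HEAD AC X
```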
456
src/impakt/channel/model.py
Normal file
@@ -0,0 +1,456 @@
"""Core data model for crash test channels and test data.

All channel objects are immutable — transforms produce new Channel instances
rather than modifying existing ones.
"""

from __future__ import annotations

import fnmatch
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import date
from pathlib import Path
from typing import Any, Iterator

import numpy as np
from numpy.typing import NDArray

from impakt.channel.code import ChannelCode


# ---------------------------------------------------------------------------
# Metadata models
# ---------------------------------------------------------------------------


@dataclass(frozen=True)
class VehicleInfo:
    """Vehicle identification and physical properties."""

    make: str = ""
    model: str = ""
    year: int = 0
    vin: str = ""
    mass_kg: float = 0.0
    vehicle_type: str = ""


@dataclass(frozen=True)
class DummyInfo:
    """Crash test dummy identification."""

    dummy_type: str = ""
    serial: str = ""
    position: str = ""
    mass_kg: float = 0.0

    @property
    def is_hybrid3_50m(self) -> bool:
        return "50" in self.dummy_type and ("H3" in self.dummy_type or "Hybrid" in self.dummy_type)

    @property
    def is_hybrid3_5f(self) -> bool:
        return "5" in self.dummy_type and "F" in self.dummy_type.upper()

    @property
    def is_thor(self) -> bool:
        return "THOR" in self.dummy_type.upper()


@dataclass(frozen=True)
class ImpactConfig:
    """Test impact configuration."""

    test_type: str = ""
    speed_kmh: float = 0.0
    barrier_type: str = ""
    impact_angle_deg: float = 0.0
    overlap_percent: float = 0.0
    impact_side: str = ""
    standard: str = ""


@dataclass(frozen=True)
class TestMetadata:
    """Top-level test metadata parsed from MME.ini."""

    test_number: str = ""
    test_date: date | None = None
    test_facility: str = ""
    description: str = ""
    vehicle: VehicleInfo = field(default_factory=VehicleInfo)
    dummy: DummyInfo = field(default_factory=DummyInfo)
    impact: ImpactConfig = field(default_factory=ImpactConfig)
    extra: dict[str, Any] = field(default_factory=dict)


# ---------------------------------------------------------------------------
# Channel model
# ---------------------------------------------------------------------------


@dataclass(frozen=True)
class Channel:
    """An immutable time-series data channel.

    Attributes:
        name: Raw channel code string (e.g., '11HEAD0000H3ACXA').
        code: Parsed ChannelCode with structured field access.
        data: NumPy array of sample values.
        time: NumPy array of time values (seconds), same length as data.
        unit: Physical unit of the data values.
        sample_rate: Sampling rate in Hz.
        cfc_class: CFC filter class applied to this data, if any.
        metadata: Additional key-value metadata from the .chn header.
        source_test_id: ID of the test this channel belongs to.
        transform_history: Tuple of transform descriptions applied.
    """

    name: str
    code: ChannelCode
    data: NDArray[np.floating[Any]]
    time: NDArray[np.floating[Any]]
    unit: str = ""
    sample_rate: float = 0.0
    cfc_class: int | None = None
    metadata: dict[str, Any] = field(default_factory=dict)
    source_test_id: str = ""
    transform_history: tuple[str, ...] = ()

    def __post_init__(self) -> None:
        if len(self.data) != len(self.time):
            raise ValueError(
                f"Channel {self.name}: data length ({len(self.data)}) != "
                f"time length ({len(self.time)})"
            )

    @property
    def duration(self) -> float:
        """Duration of the channel in seconds."""
        if len(self.time) < 2:
            return 0.0
        return float(self.time[-1] - self.time[0])

    @property
    def n_samples(self) -> int:
        return len(self.data)

    @property
    def dt(self) -> float:
        """Time step between samples."""
        if self.sample_rate > 0:
            return 1.0 / self.sample_rate
        if len(self.time) >= 2:
            return float(self.time[1] - self.time[0])
        return 0.0

    @property
    def peak(self) -> float:
        """Absolute peak value."""
        return float(np.max(np.abs(self.data)))

    @property
    def peak_time(self) -> float:
        """Time of the absolute peak value."""
        idx = int(np.argmax(np.abs(self.data)))
        return float(self.time[idx])

    def value_at(self, t: float) -> float:
        """Interpolated value at a specific time.

        Uses linear interpolation between the two nearest samples.
        """
        return float(np.interp(t, self.time, self.data))

    def with_data(
        self,
        data: NDArray[np.floating[Any]],
        time: NDArray[np.floating[Any]] | None = None,
        *,
        unit: str | None = None,
        cfc_class: int | None = ...,  # type: ignore[assignment]
        transform_note: str = "",
    ) -> Channel:
        """Create a new Channel with different data, preserving metadata.

        This is the primary mechanism for non-destructive transforms.
        """
        history = self.transform_history
        if transform_note:
            history = (*history, transform_note)

        # Sentinel check for cfc_class: Ellipsis means "not provided",
        # so an explicit None can still clear the filter class
        new_cfc = self.cfc_class if cfc_class is ... else cfc_class

        return Channel(
            name=self.name,
            code=self.code,
            data=data,
            time=time if time is not None else self.time,
            unit=unit if unit is not None else self.unit,
            sample_rate=self.sample_rate,
            cfc_class=new_cfc,
            metadata=self.metadata,
            source_test_id=self.source_test_id,
            transform_history=history,
        )
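The `...` default on `cfc_class` is a sentinel, so that an explicit `None` (clear the filter class) can be told apart from "argument not passed". The same pattern in isolation, with illustrative names:

```python
_UNSET = object()  # any unique sentinel works; with_data() uses Ellipsis

def next_cfc(current, cfc_class=_UNSET):
    """Keep `current` when the caller passed nothing; honor an explicit None."""
    return current if cfc_class is _UNSET else cfc_class

print(next_cfc(600))        # 600  -> unchanged
print(next_cfc(600, None))  # None -> explicitly cleared
print(next_cfc(600, 180))   # 180  -> replaced
```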

    def __repr__(self) -> str:
        transforms = (
            f" [{len(self.transform_history)} transforms]" if self.transform_history else ""
        )
        return f"Channel({self.name}, {self.n_samples} pts, {self.unit}{transforms})"


# ---------------------------------------------------------------------------
# Channel grouping
# ---------------------------------------------------------------------------


@dataclass
class ChannelGroup:
    """A group of related channels (typically X/Y/Z components).

    Enables one-call resultant computation and component access.
    """

    key: str
    x: Channel | None = None
    y: Channel | None = None
    z: Channel | None = None

    def components(self) -> list[Channel]:
        """Return all available component channels."""
        return [ch for ch in (self.x, self.y, self.z) if ch is not None]

    def resultant(self) -> Channel:
        """Compute the vector magnitude (resultant) from available components.

        Returns a new Channel with direction='R' in its code.
        """
        comps = self.components()
        if not comps:
            raise ValueError(f"Group {self.key} has no component channels")

        # Square root of the sum of squares across components
        result = np.zeros_like(comps[0].data)
        for ch in comps:
            result += ch.data**2
        result = np.sqrt(result)

        # Build a resultant channel code
        ref = comps[0]
        resultant_name = (
            ref.name[:12] + "R" + ref.name[13:] if len(ref.name) >= 13 else ref.name + "_R"
        )

        resultant_code = ChannelCode(
            raw=resultant_name,
            test_object=ref.code.test_object,
            main_location=ref.code.main_location,
            fine_location=ref.code.fine_location,
            measurement=ref.code.measurement,
            direction="R",
            sense=ref.code.sense,
            filter_class=ref.code.filter_class,
            _valid=True,
        )

        axis_labels = "".join(ch.code.direction for ch in comps)
        return Channel(
            name=resultant_name,
            code=resultant_code,
            data=result,
            time=ref.time,
            unit=ref.unit,
            sample_rate=ref.sample_rate,
            cfc_class=ref.cfc_class,
            metadata=ref.metadata,
            source_test_id=ref.source_test_id,
            transform_history=ref.transform_history + (f"resultant({axis_labels})",),
        )
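Stripped of the Channel bookkeeping, the resultant is just the per-sample Euclidean magnitude:

```python
import math

# Two component traces sampled at the same instants
x = [0.0, 3.0, 6.0]
y = [0.0, 4.0, 8.0]

resultant = [math.sqrt(a * a + b * b) for a, b in zip(x, y)]
print(resultant)  # [0.0, 5.0, 10.0]
```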

    @property
    def description(self) -> str:
        """Human-readable group description."""
        ref = next((ch for ch in (self.x, self.y, self.z) if ch is not None), None)
        if ref is None:
            return self.key
        return (
            f"{ref.code.test_object_label} {ref.code.location_label} — {ref.code.measurement_label}"
        )

    def __repr__(self) -> str:
        axes = []
        if self.x is not None:
            axes.append("X")
        if self.y is not None:
            axes.append("Y")
        if self.z is not None:
            axes.append("Z")
        return f"ChannelGroup({self.key}, [{'/'.join(axes)}])"


# ---------------------------------------------------------------------------
# Test data container
# ---------------------------------------------------------------------------


class TestData:
    """Container for all data from a single crash test.

    Provides channel access, pattern-based search, and automatic grouping
    based on ISO naming conventions.
    """

    def __init__(
        self,
        test_id: str,
        metadata: TestMetadata,
        channels: dict[str, Channel],
        path: Path | None = None,
    ) -> None:
        self.test_id = test_id
        self.metadata = metadata
        self._channels = dict(channels)
        self.path = path
        self._groups: dict[str, ChannelGroup] | None = None

    @property
    def channels(self) -> dict[str, Channel]:
        """All channels, keyed by name."""
        return dict(self._channels)

    @property
    def channel_names(self) -> list[str]:
        """Sorted list of channel names."""
        return sorted(self._channels.keys())

    def get(self, name: str) -> Channel:
        """Get a channel by name (exact match first, then case-insensitive).

        Raises KeyError if not found.
        """
        if name in self._channels:
            return self._channels[name]
        # Fall back to a case-insensitive scan
        for key, ch in self._channels.items():
            if key.upper() == name.upper():
                return ch
        raise KeyError(f"Channel '{name}' not found in test {self.test_id}")

    def __getitem__(self, name: str) -> Channel:
        return self.get(name)

    def find(self, pattern: str) -> list[Channel]:
        """Find channels matching a glob-like pattern.

        Supports ``*`` wildcards and ``{X,Y,Z}`` set notation.

        Examples:
            ``test.find("11HEAD0000AC*")``
            ``test.find("**CHST****DC*A")``
            ``test.find("11HEAD0000AC{X,Y,Z}A")``
        """
        results = []
        for ch in self._channels.values():
            if ch.code.is_valid and ch.code.matches(pattern):
                results.append(ch)
            elif fnmatch.fnmatch(ch.name.upper(), pattern.upper()):
                results.append(ch)
        return sorted(results, key=lambda c: c.name)
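The fnmatch fallback alone already covers the `*` wildcard case; the `{X,Y,Z}` set notation is handled by `ChannelCode.matches`, which is not shown here:

```python
import fnmatch

names = ["11HEAD0000ACXA", "11HEAD0000ACYA", "11CHST0000DCXA"]
pattern = "11HEAD0000AC*"

# Case-insensitive glob match, as in TestData.find()
hits = [n for n in names if fnmatch.fnmatch(n.upper(), pattern.upper())]
print(hits)  # ['11HEAD0000ACXA', '11HEAD0000ACYA']
```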

    def groups(self) -> dict[str, ChannelGroup]:
        """Auto-group channels into X/Y/Z component groups.

        Groups are cached after first computation. Channels are grouped
        by their ``group_key()`` — shared test object, location, measurement,
        and sense, differing only in direction (X/Y/Z).
        """
        if self._groups is not None:
            return dict(self._groups)

        group_map: dict[str, ChannelGroup] = {}
        temp: dict[str, dict[str, Channel]] = defaultdict(dict)

        for ch in self._channels.values():
            if not ch.code.is_valid or not ch.code.is_component():
                continue
            gkey = ch.code.group_key()
            axis = ch.code.direction
            temp[gkey][axis] = ch

        for gkey, axes in temp.items():
            group_map[gkey] = ChannelGroup(
                key=gkey,
                x=axes.get("X"),
                y=axes.get("Y"),
                z=axes.get("Z"),
            )

        self._groups = group_map
        return dict(self._groups)

    def group(self, pattern: str) -> ChannelGroup:
        """Find a channel group matching a pattern.

        The pattern should match the group key (without direction).

        Example: ``test.group("11HEAD0000AC")`` finds the head acceleration group.
        """
        all_groups = self.groups()

        # Direct match
        if pattern in all_groups:
            return all_groups[pattern]

        # Glob match on group keys
        for key, grp in all_groups.items():
            if fnmatch.fnmatch(key.upper(), f"*{pattern.upper()}*"):
                return grp

        raise KeyError(f"No channel group matching '{pattern}' in test {self.test_id}")

    def channel_tree(self) -> dict[str, dict[str, dict[str, list[Channel]]]]:
        """Build a hierarchical tree of channels.

        Returns: {test_object: {main_location: {measurement: [channels]}}}

        Used by the web UI for the channel browser sidebar.
        """
        tree: dict[str, dict[str, dict[str, list[Channel]]]] = {}
        for ch in self._channels.values():
            if not ch.code.is_valid:
                obj = "Other"
                loc = "Unknown"
                meas = "Unknown"
            else:
                obj = ch.code.test_object_label
                loc = ch.code.location_label
                meas = ch.code.measurement_label

            tree.setdefault(obj, {}).setdefault(loc, {}).setdefault(meas, []).append(ch)

        # Sort channels within each leaf
        for obj_dict in tree.values():
            for loc_dict in obj_dict.values():
                for meas_key in loc_dict:
                    loc_dict[meas_key] = sorted(loc_dict[meas_key], key=lambda c: c.name)

        return tree

    def __len__(self) -> int:
        return len(self._channels)

    def __iter__(self) -> Iterator[Channel]:
        return iter(self._channels.values())

    def __contains__(self, name: str) -> bool:
        return name in self._channels or name.upper() in {k.upper() for k in self._channels}

    def __repr__(self) -> str:
        return (
            f"TestData({self.test_id!r}, {len(self._channels)} channels, "
            f"metadata={self.metadata.test_number!r})"
        )
25
src/impakt/criteria/__init__.py
Normal file
@@ -0,0 +1,25 @@
"""Injury criteria calculation engine."""

from impakt.criteria.base import CriterionResult, InjuryCriterion
from impakt.criteria.chest import chest_deflection, viscous_criterion
from impakt.criteria.clip3ms import clip_3ms
from impakt.criteria.femur import femur_load
from impakt.criteria.hic import hic, hic15, hic36
from impakt.criteria.nij import NijIntercepts, nij
from impakt.criteria.tibia import TibiaIntercepts, tibia_index

__all__ = [
    "CriterionResult",
    "InjuryCriterion",
    "NijIntercepts",
    "TibiaIntercepts",
    "chest_deflection",
    "clip_3ms",
    "femur_load",
    "hic",
    "hic15",
    "hic36",
    "nij",
    "tibia_index",
    "viscous_criterion",
]
BIN
src/impakt/criteria/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/__init__.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/base.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/base.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/chest.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/chest.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/clip3ms.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/clip3ms.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/femur.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/femur.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/hic.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/hic.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/nij.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/nij.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/tibia.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/criteria/__pycache__/tibia.cpython-314.pyc
Normal file
Binary file not shown.
58
src/impakt/criteria/base.py
Normal file
@@ -0,0 +1,58 @@
"""Base types for injury criteria calculations."""

from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any, Protocol, runtime_checkable

from impakt.channel.model import Channel, DummyInfo


@dataclass(frozen=True)
class CriterionResult:
    """Result of an injury criterion computation.

    Attributes:
        criterion: Name of the criterion (e.g., 'HIC15').
        value: Computed value.
        unit: Unit of the value (dimensionless for indices).
        time_of_peak: Time at which the peak/critical value occurs.
        window: (t1, t2) time window for window-based criteria (HIC).
        body_region: Body region this criterion applies to.
        details: Additional computation details.
    """

    criterion: str
    value: float
    unit: str = ""
    time_of_peak: float | None = None
    window: tuple[float, float] | None = None
    body_region: str = ""
    details: dict[str, Any] = field(default_factory=dict)

    def __repr__(self) -> str:
        unit_str = f" {self.unit}" if self.unit else ""
        return f"CriterionResult({self.criterion}={self.value:.2f}{unit_str})"


@runtime_checkable
class InjuryCriterion(Protocol):
    """Protocol for injury criteria calculators."""

    @property
    def name(self) -> str:
        """Criterion name (e.g., 'HIC15', 'Nij')."""
        ...

    @property
    def required_channels(self) -> list[str]:
        """Channel patterns required for computation."""
        ...

    def compute(
        self,
        channels: dict[str, Channel],
        dummy: DummyInfo | None = None,
    ) -> CriterionResult:
        """Compute the criterion from the given channels."""
        ...
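Because the protocol is `runtime_checkable`, any object exposing the right attributes passes an `isinstance` check without inheriting from it (note that this verifies attribute presence only, not signatures). A toy conformance check with made-up names:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Named(Protocol):
    @property
    def name(self) -> str: ...

class Hic15:  # structurally conforms: no inheritance needed
    @property
    def name(self) -> str:
        return "HIC15"

print(isinstance(Hic15(), Named))   # True
print(isinstance(object(), Named))  # False
```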
153
src/impakt/criteria/chest.py
Normal file
@@ -0,0 +1,153 @@
"""Chest injury criteria: deflection and viscous criterion.

Chest Deflection: Peak sternal displacement (mm).
Viscous Criterion (VC): V(t) * C(t), where V = deflection velocity and
C = compression ratio (deflection divided by initial chest depth).
"""

from __future__ import annotations

from typing import Any

import numpy as np

from impakt.channel.model import Channel, DummyInfo
from impakt.criteria.base import CriterionResult


# Initial chest depth by dummy type (mm)
CHEST_DEPTH: dict[str, float] = {
    "H3-50M": 229.0,
    "H3-05F": 187.0,
    "H3-95M": 254.0,
    "THOR-50M": 229.0,
    "ES-2re": 140.0,  # Approximate rib depth
    "WSID-50": 140.0,
}

DEFAULT_CHEST_DEPTH = 229.0  # mm, Hybrid III 50th Male
|
||||
|
||||
|
||||
def chest_deflection(
|
||||
channel: Channel | None = None,
|
||||
channels: dict[str, Channel] | None = None,
|
||||
dummy: DummyInfo | None = None,
|
||||
) -> CriterionResult:
|
||||
"""Compute peak chest deflection.
|
||||
|
||||
For Hybrid III: single-point sternal deflection.
|
||||
For THOR: may involve multiple IR-TRACC channels (returns max across all).
|
||||
|
||||
Args:
|
||||
channel: Chest deflection Channel (mm).
|
||||
channels: Alternative dict (uses first deflection channel found).
|
||||
dummy: Dummy info (for future multi-point THOR handling).
|
||||
|
||||
Returns:
|
||||
CriterionResult with peak deflection value.
|
||||
"""
|
||||
if channels is not None and channel is None:
|
||||
# Use the first channel or the one with the highest peak
|
||||
ch_list = list(channels.values())
|
||||
channel = max(ch_list, key=lambda c: np.max(np.abs(c.data)))
|
||||
|
||||
if channel is None:
|
||||
raise ValueError("A chest deflection channel is required")
|
||||
|
||||
deflection = channel.data.copy()
|
||||
|
||||
# Convert units if needed
|
||||
if channel.unit in ("m",):
|
||||
deflection = deflection * 1000.0 # m -> mm
|
||||
|
||||
peak_value = float(np.max(np.abs(deflection)))
|
||||
peak_idx = int(np.argmax(np.abs(deflection)))
|
||||
peak_time = float(channel.time[peak_idx])
|
||||
|
||||
return CriterionResult(
|
||||
criterion="Chest Deflection",
|
||||
value=peak_value,
|
||||
unit="mm",
|
||||
time_of_peak=peak_time,
|
||||
body_region="Chest",
|
||||
details={
|
||||
"signed_peak": float(deflection[peak_idx]),
|
||||
"input_channel": channel.name,
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
def viscous_criterion(
|
||||
channel: Channel | None = None,
|
||||
channels: dict[str, Channel] | None = None,
|
||||
dummy: DummyInfo | None = None,
|
||||
chest_depth_mm: float | None = None,
|
||||
) -> CriterionResult:
|
||||
"""Compute the Viscous Criterion (VC).
|
||||
|
||||
VC = V(t) * C(t)
|
||||
where:
|
||||
V(t) = d[D(t)]/dt (velocity of chest deflection, m/s)
|
||||
C(t) = D(t) / D0 (instantaneous compression ratio)
|
||||
D0 = initial chest depth (mm)
|
||||
|
||||
Args:
|
||||
channel: Chest deflection time-history (mm).
|
||||
channels: Alternative dict.
|
||||
dummy: Dummy info for chest depth lookup.
|
||||
chest_depth_mm: Override initial chest depth.
|
||||
|
||||
Returns:
|
||||
CriterionResult with max VC value.
|
||||
"""
|
||||
if channels is not None and channel is None:
|
||||
ch_list = list(channels.values())
|
||||
channel = max(ch_list, key=lambda c: np.max(np.abs(c.data)))
|
||||
|
||||
if channel is None:
|
||||
raise ValueError("A chest deflection channel is required")
|
||||
|
||||
# Get chest depth
|
||||
d0 = chest_depth_mm
|
||||
if d0 is None and dummy is not None:
|
||||
for key, depth in CHEST_DEPTH.items():
|
||||
if key.lower() in dummy.dummy_type.lower():
|
||||
d0 = depth
|
||||
break
|
||||
if d0 is None:
|
||||
d0 = DEFAULT_CHEST_DEPTH
|
||||
|
||||
deflection_mm = channel.data.copy()
|
||||
if channel.unit in ("m",):
|
||||
deflection_mm = deflection_mm * 1000.0
|
||||
|
||||
dt = channel.dt
|
||||
if dt <= 0:
|
||||
raise ValueError("Channel has no timing information (dt=0)")
|
||||
|
||||
# Velocity in m/s (deflection is in mm, so divide by 1000)
|
||||
velocity = np.gradient(deflection_mm / 1000.0, dt)
|
||||
|
||||
# Compression ratio (dimensionless)
|
||||
compression = deflection_mm / d0
|
||||
|
||||
# VC in m/s
|
||||
vc = velocity * compression
|
||||
|
||||
peak_vc = float(np.max(np.abs(vc)))
|
||||
peak_idx = int(np.argmax(np.abs(vc)))
|
||||
peak_time = float(channel.time[peak_idx])
|
||||
|
||||
return CriterionResult(
|
||||
criterion="Viscous Criterion",
|
||||
value=peak_vc,
|
||||
unit="m/s",
|
||||
time_of_peak=peak_time,
|
||||
body_region="Chest",
|
||||
details={
|
||||
"chest_depth_mm": d0,
|
||||
"max_velocity_m_s": float(np.max(np.abs(velocity))),
|
||||
"max_compression": float(np.max(np.abs(compression))),
|
||||
"signed_peak_vc": float(vc[peak_idx]),
|
||||
"input_channel": channel.name,
|
||||
},
|
||||
)
|
||||
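The VC computation above reduces to a few lines of NumPy. A minimal standalone sketch on a synthetic half-sine deflection pulse (the pulse shape, 50 mm peak, and 50 ms duration are invented for illustration; the real code reads deflection and `dt` from a `Channel`):

```python
import numpy as np

# Synthetic half-sine sternal deflection: 50 mm peak over a 50 ms pulse
dt = 1e-4                                   # 0.1 ms sampling
t = np.arange(0.0, 0.05 + dt, dt)
deflection_mm = 50.0 * np.sin(np.pi * t / 0.05)

d0 = 229.0                                  # H3-50M initial chest depth (mm)
velocity = np.gradient(deflection_mm / 1000.0, dt)  # V(t) in m/s
compression = deflection_mm / d0            # C(t), dimensionless
vc = velocity * compression                 # VC(t) in m/s

peak_vc = float(np.max(np.abs(vc)))
print(f"VCmax = {peak_vc:.3f} m/s")         # analytically ~0.343 m/s for this pulse
```

For this pulse VC(t) works out to 0.343·sin(2πt/0.05) m/s analytically, so the numeric peak lands right on the closed-form value, a quick check that the gradient/ratio product is wired correctly.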
92
src/impakt/criteria/clip3ms.py
Normal file
@@ -0,0 +1,92 @@
"""3ms Clip (Chest Acceleration) criterion.

The 3ms clip is the maximum acceleration value sustained for a cumulative
duration of 3 milliseconds. It represents the highest acceleration level
where the total time the signal exceeds that level equals at least 3 ms.
"""

from __future__ import annotations

import numpy as np

from impakt.channel.model import Channel, ChannelGroup, DummyInfo
from impakt.criteria.base import CriterionResult


def clip_3ms(
    channel_or_group: Channel | ChannelGroup,
    clip_duration_ms: float = 3.0,
    channels: dict[str, Channel] | None = None,
    dummy: DummyInfo | None = None,
) -> CriterionResult:
    """Compute the 3ms clip value.

    The 3ms clip is found by determining the highest acceleration level A
    such that the cumulative time the signal exceeds A is >= 3 ms.

    Args:
        channel_or_group: Resultant acceleration Channel (in g's) or
            ChannelGroup with X/Y/Z components.
        clip_duration_ms: Clip duration in milliseconds (default 3.0).
        channels: Alternative channel dict.
        dummy: Dummy info (unused).

    Returns:
        CriterionResult with the 3ms clip value.
    """
    # Get resultant
    if isinstance(channel_or_group, ChannelGroup):
        resultant = channel_or_group.resultant()
    elif isinstance(channel_or_group, Channel):
        resultant = channel_or_group
    elif channels is not None:
        from impakt.transform.resultant import resultant_from_channels

        resultant = resultant_from_channels(*channels.values())
    else:
        raise TypeError("Provide a Channel, ChannelGroup, or channels dict")

    accel = np.abs(resultant.data)

    # Convert to g if needed
    if resultant.unit in ("m/s²", "m/s^2", "m/s2"):
        accel = accel / 9.80665

    dt = resultant.dt
    clip_duration_s = clip_duration_ms / 1000.0
    clip_samples = clip_duration_s / dt if dt > 0 else 0

    # Sort acceleration values in descending order
    sorted_accel = np.sort(accel)[::-1]

    # The clip value is the highest level whose cumulative exceedance time is
    # at least the clip duration: with n = ceil(clip_samples) samples at or
    # above a level, that level is the nth largest sample.
    if clip_samples <= 1:
        clip_value = float(sorted_accel[0])
    else:
        n = int(np.ceil(clip_samples))
        if n >= len(sorted_accel):
            clip_value = float(sorted_accel[-1])
        else:
            clip_value = float(sorted_accel[n - 1])

    # Find time of peak
    peak_idx = int(np.argmax(accel))
    peak_time = float(resultant.time[peak_idx])

    return CriterionResult(
        criterion="3ms Clip",
        value=clip_value,
        unit="g",
        time_of_peak=peak_time,
        body_region="Chest",
        details={
            "clip_duration_ms": clip_duration_ms,
            "peak_accel_g": float(np.max(accel)),
            "input_channel": resultant.name,
        },
    )
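The sorted-exceedance trick above can be verified by hand on a tiny trace. A minimal sketch (the sample values are invented, and 1 ms sampling is far coarser than real crash data, which typically runs at 10 to 20 kHz):

```python
import numpy as np

accel_g = np.array([0.0, 10.0, 40.0, 60.0, 50.0, 20.0, 5.0])  # resultant accel, g
dt = 1e-3                                   # 1 ms per sample
clip_duration_s = 3e-3                      # 3 ms clip

n = int(np.ceil(clip_duration_s / dt))      # samples needed: 3
sorted_accel = np.sort(accel_g)[::-1]       # [60, 50, 40, 20, 10, 5, 0]
clip_value = float(sorted_accel[n - 1])     # 3rd largest sample
print(clip_value)                           # → 40.0
```

Exactly three samples (3 ms of signal) are at or above 40 g, and no higher level is sustained that long, so 40 g is the clip value.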
107
src/impakt/criteria/femur.py
Normal file
@@ -0,0 +1,107 @@
"""Femur load criterion.

Peak compressive axial force measured at the femur load cells.
Evaluated separately for left and right femur.
"""

from __future__ import annotations

import numpy as np

from impakt.channel.model import Channel, DummyInfo
from impakt.criteria.base import CriterionResult


def femur_load(
    channel: Channel | None = None,
    channels: dict[str, Channel] | None = None,
    side: str = "both",
    dummy: DummyInfo | None = None,
) -> CriterionResult | list[CriterionResult]:
    """Compute peak femur axial load.

    Args:
        channel: Single femur force channel (N or kN).
        channels: Dict of channels (will auto-detect left/right from names).
        side: 'left', 'right', or 'both'.
        dummy: Dummy info (unused currently).

    Returns:
        CriterionResult for a single side, or list of two results if 'both'.
    """
    if side == "both" and channels is not None:
        results = []
        left_ch = None
        right_ch = None

        for name, ch in channels.items():
            name_upper = name.upper()
            if "LE" in name_upper or "LEFT" in name_upper or name_upper == "L":
                left_ch = ch
            elif "RI" in name_upper or "RIGHT" in name_upper or name_upper == "R":
                right_ch = ch

        if left_ch is not None:
            results.append(_compute_single(left_ch, "Left"))
        if right_ch is not None:
            results.append(_compute_single(right_ch, "Right"))

        if not results:
            # Side detection failed; compute for whatever channels we have
            for i, ch in enumerate(channels.values()):
                results.append(_compute_single(ch, f"#{i + 1}"))

        if len(results) > 1:
            return results
        if results:
            return results[0]
        return _empty_result()

    if channel is None and channels is not None:
        channel = next(iter(channels.values()))

    if channel is None:
        raise ValueError("A femur load channel is required")

    side_label = side.capitalize() if side != "both" else ""
    return _compute_single(channel, side_label)


def _compute_single(channel: Channel, side_label: str) -> CriterionResult:
    """Compute femur load for a single channel."""
    force = channel.data.copy()

    # Convert to N if in kN
    if channel.unit in ("kN",):
        force = force * 1000.0

    # Femur load criterion uses compressive force (typically negative in SAE
    # convention). Report the absolute peak, and keep the signed peak in details.
    peak_compressive = float(np.max(np.abs(force)))
    peak_idx = int(np.argmax(np.abs(force)))
    peak_time = float(channel.time[peak_idx])

    label = f"Femur Load {side_label}".strip()

    return CriterionResult(
        criterion=label,
        value=peak_compressive / 1000.0,  # Report in kN
        unit="kN",
        time_of_peak=peak_time,
        body_region=f"Femur {side_label}".strip(),
        details={
            "peak_force_N": peak_compressive,
            "signed_peak_N": float(force[peak_idx]),
            "input_channel": channel.name,
        },
    )


def _empty_result() -> CriterionResult:
    return CriterionResult(
        criterion="Femur Load",
        value=0.0,
        unit="kN",
        body_region="Femur",
        details={"error": "No femur channel provided"},
    )
149
src/impakt/criteria/hic.py
Normal file
@@ -0,0 +1,149 @@
"""Head Injury Criterion (HIC) calculation.

HIC = max over (t1, t2) { (t2-t1) * [ 1/(t2-t1) * integral(t1,t2) a(t) dt ]^2.5 }

where a(t) is the resultant head acceleration in g's.

HIC15: max window = 15 ms
HIC36: max window = 36 ms
"""

from __future__ import annotations

from typing import Any

import numpy as np
from numpy.typing import NDArray

from impakt.channel.model import Channel, ChannelGroup, DummyInfo
from impakt.criteria.base import CriterionResult


def _compute_hic(
    accel: NDArray[np.floating[Any]],
    time: NDArray[np.floating[Any]],
    max_window_s: float,
) -> tuple[float, float, float]:
    """Core HIC computation using cumulative integration.

    Returns:
        (hic_value, t1, t2) — the HIC value and the optimal window.
    """
    n = len(accel)
    if n < 2:
        return 0.0, 0.0, 0.0

    dt = np.diff(time)
    # Cumulative integral using trapezoidal rule
    cum_integral = np.zeros(n)
    cum_integral[1:] = np.cumsum(0.5 * (accel[:-1] + accel[1:]) * dt)

    best_hic = 0.0
    best_t1 = 0.0
    best_t2 = 0.0

    # Sliding window search: for each start i, scan forward until the window
    # exceeds the maximum allowed width.
    for i in range(n - 1):
        t_i = time[i]

        for j in range(i + 1, n):
            dt_window = time[j] - t_i
            if dt_window <= 0:
                continue
            if dt_window > max_window_s:
                break

            avg_accel = (cum_integral[j] - cum_integral[i]) / dt_window
            hic_val = dt_window * (abs(avg_accel) ** 2.5)

            if hic_val > best_hic:
                best_hic = hic_val
                best_t1 = t_i
                best_t2 = time[j]

    return best_hic, float(best_t1), float(best_t2)


def hic(
    channel_or_group: Channel | ChannelGroup,
    window_ms: int = 15,
    channels: dict[str, Channel] | None = None,
    dummy: DummyInfo | None = None,
) -> CriterionResult:
    """Compute HIC (Head Injury Criterion).

    Args:
        channel_or_group: Either a resultant acceleration Channel (in g's),
            or a ChannelGroup with X/Y/Z head acceleration components.
        window_ms: Maximum window in milliseconds (15 or 36).
        channels: Alternative: dict of named channels.
        dummy: Dummy info (not used for HIC, but part of the protocol).

    Returns:
        CriterionResult with the HIC value.
    """
    if window_ms not in (15, 36):
        raise ValueError(f"HIC window must be 15 or 36 ms, got {window_ms}")

    # Get the resultant acceleration
    if isinstance(channel_or_group, ChannelGroup):
        resultant = channel_or_group.resultant()
    elif isinstance(channel_or_group, Channel):
        resultant = channel_or_group
    elif channels is not None:
        # Try to compute resultant from dict
        comps = list(channels.values())
        if len(comps) == 1:
            resultant = comps[0]
        else:
            from impakt.transform.resultant import resultant_from_channels

            resultant = resultant_from_channels(*comps)
    else:
        raise TypeError("Provide either a Channel (resultant), ChannelGroup, or channels dict")

    # Ensure data is in g's (basic check — if unit is m/s², convert)
    accel = resultant.data.copy()
    if resultant.unit in ("m/s²", "m/s^2", "m/s2"):
        accel = accel / 9.80665  # Convert to g

    max_window_s = window_ms / 1000.0
    hic_value, t1, t2 = _compute_hic(accel, resultant.time, max_window_s)

    return CriterionResult(
        criterion=f"HIC{window_ms}",
        value=hic_value,
        unit="",
        time_of_peak=(t1 + t2) / 2.0,
        window=(t1, t2),
        body_region="Head",
        details={
            "window_ms": window_ms,
            "t1": t1,
            "t2": t2,
            "window_duration_ms": (t2 - t1) * 1000,
            "input_unit": resultant.unit,
            "input_channel": resultant.name,
        },
    )


def hic15(
    channel_or_group: Channel | ChannelGroup,
    **kwargs: Any,
) -> CriterionResult:
    """Convenience: compute HIC15."""
    return hic(channel_or_group, window_ms=15, **kwargs)


def hic36(
    channel_or_group: Channel | ChannelGroup,
    **kwargs: Any,
) -> CriterionResult:
    """Convenience: compute HIC36."""
    return hic(channel_or_group, window_ms=36, **kwargs)
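For a rectangular pulse the HIC optimum has a closed form, which makes a handy sanity check on the window search above: a constant a (in g) sustained for T seconds gives HIC = T * a**2.5, and extending the window into the surrounding zeros only lowers the value. A standalone brute-force sketch (synthetic pulse, not real test data; this mirrors the cumulative-trapezoid approach of `_compute_hic` but is independent of the `impakt` classes):

```python
import numpy as np

def hic_sketch(accel_g, time_s, max_window_s=0.015):
    """Brute-force HIC over all windows up to max_window_s."""
    n = len(accel_g)
    cum = np.zeros(n)  # cumulative trapezoidal integral of a(t)
    cum[1:] = np.cumsum(0.5 * (accel_g[:-1] + accel_g[1:]) * np.diff(time_s))
    best = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            w = time_s[j] - time_s[i]
            if w > max_window_s:
                break
            best = max(best, w * abs((cum[j] - cum[i]) / w) ** 2.5)
    return best

t = np.arange(0.0, 0.05, 1e-4)      # 50 ms trace at 0.1 ms steps
a = np.zeros(len(t))
a[100:201] = 50.0                   # 50 g sustained for exactly 10 ms
print(round(hic_sketch(a, t), 1))   # → 176.8  (analytic: 0.010 * 50**2.5)
```

The search lands on the window spanning exactly the pulse; any wider window dilutes the average faster than the extra duration helps, since HIC scales as T**-1.5 once the integral stops growing.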
213
src/impakt/criteria/nij.py
Normal file
@@ -0,0 +1,213 @@
"""Neck Injury Criterion (Nij) calculation.

Nij = Fz/Fzc + My/Myc

Four loading modes:
    NTE: Tension + Extension
    NTF: Tension + Flexion
    NCE: Compression + Extension
    NCF: Compression + Flexion

The reported Nij is the maximum across all four modes and all time steps.
"""

from __future__ import annotations

from dataclasses import dataclass

import numpy as np

from impakt.channel.model import Channel, DummyInfo
from impakt.criteria.base import CriterionResult


@dataclass(frozen=True)
class NijIntercepts:
    """Critical intercept values for Nij calculation.

    These are dummy-specific and define the denominator terms.
    """

    fzc_tension: float  # N (positive Fz = tension)
    fzc_compression: float  # N (negative Fz = compression)
    myc_flexion: float  # N·m (positive My = flexion)
    myc_extension: float  # N·m (negative My = extension)


# Standard intercepts by dummy type
NIJ_INTERCEPTS: dict[str, NijIntercepts] = {
    "H3-50M": NijIntercepts(
        fzc_tension=6806.0,
        fzc_compression=6160.0,
        myc_flexion=310.0,
        myc_extension=135.0,
    ),
    "Hybrid III 50th Percentile Male": NijIntercepts(
        fzc_tension=6806.0,
        fzc_compression=6160.0,
        myc_flexion=310.0,
        myc_extension=135.0,
    ),
    "H3-05F": NijIntercepts(
        fzc_tension=4287.0,
        fzc_compression=3880.0,
        myc_flexion=155.0,
        myc_extension=67.0,
    ),
    "Hybrid III 5th Percentile Female": NijIntercepts(
        fzc_tension=4287.0,
        fzc_compression=3880.0,
        myc_flexion=155.0,
        myc_extension=67.0,
    ),
    "H3-95M": NijIntercepts(
        fzc_tension=8216.0,
        fzc_compression=7440.0,
        myc_flexion=415.0,
        myc_extension=179.0,
    ),
    "H3-6YO": NijIntercepts(
        fzc_tension=2800.0,
        fzc_compression=2800.0,
        myc_flexion=93.0,
        myc_extension=37.0,
    ),
    "H3-3YO": NijIntercepts(
        fzc_tension=2120.0,
        fzc_compression=2120.0,
        myc_flexion=68.0,
        myc_extension=27.0,
    ),
    "H3-10YO": NijIntercepts(
        fzc_tension=3500.0,
        fzc_compression=3500.0,
        myc_flexion=125.0,
        myc_extension=50.0,
    ),
}

# Default to Hybrid III 50th Male
DEFAULT_INTERCEPTS = NIJ_INTERCEPTS["H3-50M"]


def _get_intercepts(dummy: DummyInfo | None) -> NijIntercepts:
    """Look up Nij intercepts for a given dummy."""
    if dummy is None:
        return DEFAULT_INTERCEPTS

    # Try direct match
    for key, intercepts in NIJ_INTERCEPTS.items():
        if key.lower() in dummy.dummy_type.lower():
            return intercepts

    # Heuristic matching
    dtype = dummy.dummy_type.upper()
    if "50" in dtype and ("MALE" in dtype or "50M" in dtype or "H3" in dtype):
        return NIJ_INTERCEPTS["H3-50M"]
    elif "5" in dtype and ("FEM" in dtype or "05F" in dtype or "5F" in dtype):
        return NIJ_INTERCEPTS["H3-05F"]
    elif "95" in dtype:
        return NIJ_INTERCEPTS["H3-95M"]

    return DEFAULT_INTERCEPTS


def nij(
    fz_channel: Channel | None = None,
    my_channel: Channel | None = None,
    channels: dict[str, Channel] | None = None,
    dummy: DummyInfo | None = None,
    intercepts: NijIntercepts | None = None,
) -> CriterionResult:
    """Compute the Neck Injury Criterion (Nij).

    Args:
        fz_channel: Upper neck axial force channel (N). Positive = tension.
        my_channel: Upper neck sagittal moment channel (N·m). Positive = flexion.
        channels: Alternative: dict with 'fz' and 'my' keys.
        dummy: Dummy info for intercept lookup.
        intercepts: Override intercept values directly.

    Returns:
        CriterionResult with the maximum Nij value.
    """
    # Resolve channels (explicit arguments take precedence over the dict)
    if channels is not None:
        fz_channel = fz_channel or channels.get("fz") or channels.get("Fz") or channels.get("FZ")
        my_channel = my_channel or channels.get("my") or channels.get("My") or channels.get("MY")

    if fz_channel is None or my_channel is None:
        raise ValueError("Both Fz (axial force) and My (moment) channels are required")

    # Get intercepts
    ints = intercepts or _get_intercepts(dummy)

    fz = fz_channel.data.copy()
    my = my_channel.data.copy()

    # Convert units if needed (expect N and N·m)
    if fz_channel.unit in ("kN",):
        fz = fz * 1000.0
    if my_channel.unit in ("kN·m", "kNm"):
        my = my * 1000.0

    n = min(len(fz), len(my))
    fz = fz[:n]
    my = my[:n]

    # Compute all four Nij modes at each time step:
    #   NTE: tension (Fz > 0) + extension (My < 0)
    #   NTF: tension (Fz > 0) + flexion (My > 0)
    #   NCE: compression (Fz < 0) + extension (My < 0)
    #   NCF: compression (Fz < 0) + flexion (My > 0)
    fz_tension = np.maximum(fz, 0.0) / ints.fzc_tension
    fz_compression = np.maximum(-fz, 0.0) / ints.fzc_compression
    my_flexion = np.maximum(my, 0.0) / ints.myc_flexion
    my_extension = np.maximum(-my, 0.0) / ints.myc_extension

    nte = fz_tension + my_extension
    ntf = fz_tension + my_flexion
    nce = fz_compression + my_extension
    ncf = fz_compression + my_flexion

    # Find maximum across all modes and time
    modes = {"NTE": nte, "NTF": ntf, "NCE": nce, "NCF": ncf}
    max_nij = 0.0
    max_mode = ""
    max_idx = 0

    for mode_name, mode_values in modes.items():
        idx = int(np.argmax(mode_values))
        val = float(mode_values[idx])
        if val > max_nij:
            max_nij = val
            max_mode = mode_name
            max_idx = idx

    time_arr = fz_channel.time[:n]
    peak_time = float(time_arr[max_idx])

    return CriterionResult(
        criterion="Nij",
        value=max_nij,
        unit="",
        time_of_peak=peak_time,
        body_region="Neck",
        details={
            "mode": max_mode,
            "NTE_max": float(np.max(nte)),
            "NTF_max": float(np.max(ntf)),
            "NCE_max": float(np.max(nce)),
            "NCF_max": float(np.max(ncf)),
            "intercepts": {
                "fzc_tension": ints.fzc_tension,
                "fzc_compression": ints.fzc_compression,
                "myc_flexion": ints.myc_flexion,
                "myc_extension": ints.myc_extension,
            },
            "fz_peak_N": float(np.max(np.abs(fz))),
            "my_peak_Nm": float(np.max(np.abs(my))),
        },
    )
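Because each mode is just a sum of two normalized loads, a single time step is easy to check by hand. A minimal sketch of the tension-plus-extension (NTE) mode using the H3-50M intercepts from the table above (the load values themselves are invented):

```python
# Sample instant: Fz = +4000 N (tension), My = -100 N·m (extension)
fzc_tension = 6806.0     # H3-50M tension intercept (N)
myc_extension = 135.0    # H3-50M extension intercept (N·m)

fz, my = 4000.0, -100.0
nte = max(fz, 0.0) / fzc_tension + max(-my, 0.0) / myc_extension
print(f"NTE = {nte:.3f}")   # 4000/6806 + 100/135 → NTE = 1.328
```

Note how the small extension intercept (135 N·m vs. 310 N·m for flexion) dominates the result: the neck tolerates far less rearward bending than forward bending, which is exactly what the asymmetric intercept table encodes.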
133
src/impakt/criteria/tibia.py
Normal file
@@ -0,0 +1,133 @@
"""Tibia Index (TI) calculation.

TI = |M| / Mc + |F| / Fc

where:
    M = resultant bending moment = sqrt(Mx^2 + My^2)
    F = axial compressive force
    Mc = critical bending moment (225 N·m for H3 50th)
    Fc = critical compressive force (35.9 kN for H3 50th)
"""

from __future__ import annotations

from dataclasses import dataclass

import numpy as np

from impakt.channel.model import Channel, DummyInfo
from impakt.criteria.base import CriterionResult


@dataclass(frozen=True)
class TibiaIntercepts:
    """Critical intercept values for Tibia Index."""

    mc: float  # Critical bending moment (N·m)
    fc: float  # Critical compressive force (N)


TIBIA_INTERCEPTS: dict[str, TibiaIntercepts] = {
    "H3-50M": TibiaIntercepts(mc=225.0, fc=35900.0),
    "H3-05F": TibiaIntercepts(mc=140.0, fc=22400.0),
    "H3-95M": TibiaIntercepts(mc=307.0, fc=48900.0),
}

DEFAULT_TIBIA_INTERCEPTS = TIBIA_INTERCEPTS["H3-50M"]


def tibia_index(
    fz_channel: Channel | None = None,
    mx_channel: Channel | None = None,
    my_channel: Channel | None = None,
    channels: dict[str, Channel] | None = None,
    dummy: DummyInfo | None = None,
    intercepts: TibiaIntercepts | None = None,
    location: str = "",
) -> CriterionResult:
    """Compute Tibia Index.

    Args:
        fz_channel: Tibia axial force (N).
        mx_channel: Tibia bending moment about X (N·m).
        my_channel: Tibia bending moment about Y (N·m).
        channels: Dict with 'fz', 'mx', 'my' keys.
        dummy: Dummy info for intercept lookup.
        intercepts: Override intercept values.
        location: Location label (e.g., 'Left Upper', 'Right Lower').

    Returns:
        CriterionResult with TI value.
    """
    if channels is not None:
        fz_channel = fz_channel or channels.get("fz") or channels.get("Fz")
        mx_channel = mx_channel or channels.get("mx") or channels.get("Mx")
        my_channel = my_channel or channels.get("my") or channels.get("My")

    if fz_channel is None:
        raise ValueError("Tibia axial force (Fz) channel is required")

    # Get intercepts
    ints = intercepts
    if ints is None and dummy is not None:
        for key, ti_ints in TIBIA_INTERCEPTS.items():
            if key.lower() in dummy.dummy_type.lower():
                ints = ti_ints
                break
    if ints is None:
        ints = DEFAULT_TIBIA_INTERCEPTS

    fz = fz_channel.data.copy()
    if fz_channel.unit in ("kN",):
        fz = fz * 1000.0

    n = len(fz)

    # Compute resultant bending moment
    if mx_channel is not None and my_channel is not None:
        mx = mx_channel.data[:n].copy()
        my = my_channel.data[:n].copy()
        if mx_channel.unit in ("kN·m", "kNm"):
            mx = mx * 1000.0
        if my_channel.unit in ("kN·m", "kNm"):
            my = my * 1000.0
        m_resultant = np.sqrt(mx**2 + my**2)
    elif mx_channel is not None:
        mx = mx_channel.data[:n].copy()
        if mx_channel.unit in ("kN·m", "kNm"):
            mx = mx * 1000.0
        m_resultant = np.abs(mx)
    elif my_channel is not None:
        my = my_channel.data[:n].copy()
        if my_channel.unit in ("kN·m", "kNm"):
            my = my * 1000.0
        m_resultant = np.abs(my)
    else:
        # Only axial force component
        m_resultant = np.zeros(n)

    # TI = |M|/Mc + |F|/Fc
    ti = m_resultant / ints.mc + np.abs(fz) / ints.fc

    peak_ti = float(np.max(ti))
    peak_idx = int(np.argmax(ti))
    peak_time = float(fz_channel.time[peak_idx])

    loc_label = f" ({location})" if location else ""

    return CriterionResult(
        criterion=f"Tibia Index{loc_label}",
        value=peak_ti,
        unit="",
        time_of_peak=peak_time,
        body_region=f"Tibia{loc_label}",
        details={
            "location": location,
            "mc": ints.mc,
            "fc": ints.fc,
            "peak_moment_Nm": float(np.max(m_resultant)),
            "peak_force_N": float(np.max(np.abs(fz))),
            "peak_force_kN": float(np.max(np.abs(fz))) / 1000.0,
        },
    )
1
src/impakt/io/__init__.py
Normal file
@@ -0,0 +1 @@
"""Data I/O readers for crash test formats."""
BIN
src/impakt/io/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/io/__pycache__/__init__.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/io/__pycache__/mme.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/io/__pycache__/mme.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/io/__pycache__/reader.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/io/__pycache__/reader.cpython-314.pyc
Normal file
Binary file not shown.
693
src/impakt/io/mme.py
Normal file
@@ -0,0 +1,693 @@
|
||||
"""ISO 13499 MME format reader.
|
||||
|
||||
Reads the real ISO/TS 13499 directory structure:
|
||||
<TestID>/
|
||||
<TestID>.mme # Master metadata (key :value format)
|
||||
Channel/
|
||||
<TestID>.chn # Channel index (lists all channels)
|
||||
<TestID>.001 # Channel 1: header + data in one file
|
||||
<TestID>.002 # Channel 2: header + data in one file
|
||||
...
|
||||
|
||||
Also supports the simplified format used by our synthetic fixture generator:
|
||||
<TestID>/
|
||||
MME.ini # Master metadata (INI format)
|
||||
channels/
|
||||
<ChannelCode>.chn # Per-channel header
|
||||
<ChannelCode>.dat # Per-channel data
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import configparser
|
||||
import logging
|
||||
import re
|
||||
from datetime import date, datetime
|
||||
from pathlib import Path
from typing import Any

import numpy as np
from numpy.typing import NDArray

from impakt.channel.code import ChannelCode
from impakt.channel.lookup import normalize_unit
from impakt.channel.model import (
    Channel,
    DummyInfo,
    ImpactConfig,
    TestData,
    TestMetadata,
    VehicleInfo,
)

logger = logging.getLogger(__name__)


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _parse_float(value: str, default: float = 0.0) -> float:
    try:
        v = value.strip()
        if v.upper() in ("NOVALUE", "", "UNKNOWN"):
            return default
        return float(v)
    except (ValueError, AttributeError):
        return default


def _parse_int(value: str, default: int = 0) -> int:
    try:
        v = value.strip()
        if v.upper() in ("NOVALUE", "", "UNKNOWN"):
            return default
        return int(float(v))
    except (ValueError, AttributeError):
        return default


def _novalue(s: str) -> str:
    """Return an empty string if the value is NOVALUE or empty."""
    stripped = s.strip()
    if stripped.upper() in ("NOVALUE", ""):
        return ""
    return stripped


UNIT_MAP: dict[str, str] = {
    "m/(s*s)": "m/s²",
    "m/s**2": "m/s²",
    "m/s^2": "m/s²",
    "m/s2": "m/s²",
    "m/(s²)": "m/s²",
    "g": "g",
    "G": "g",
    "N": "N",
    "kN": "kN",
    "Nm": "N·m",
    "N.m": "N·m",
    "N*m": "N·m",
    "mm": "mm",
    "m": "m",
    "cm": "cm",
    "m/s": "m/s",
    "km/h": "km/h",
    "deg": "deg",
    "rad": "rad",
    "rad/s": "rad/s",
    "kPa": "kPa",
    "1": "",
    "-": "",
}


def _normalize_unit(raw: str) -> str:
    stripped = raw.strip()
    return UNIT_MAP.get(stripped, normalize_unit(stripped))
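A quick sketch of the lookup-with-fallback pattern above. This is minimal and hypothetical: unknown units here pass through unchanged, whereas the real `_normalize_unit` defers to `impakt.channel.lookup.normalize_unit`.

```python
# Minimal sketch of UNIT_MAP-style normalization. Unlike the real
# _normalize_unit, unknown units pass through unchanged here instead of
# falling back to the library's normalize_unit().
UNIT_MAP = {
    "m/s**2": "m/s²",
    "m/s^2": "m/s²",
    "N*m": "N·m",
    "1": "",        # dimensionless
}


def normalize(raw: str) -> str:
    stripped = raw.strip()
    return UNIT_MAP.get(stripped, stripped)
```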


# ---------------------------------------------------------------------------
# ISO 13499 key:value parser
# ---------------------------------------------------------------------------


def _parse_mme_keyvalue(text: str) -> dict[str, str | list[str]]:
    """Parse the ISO 13499 key:value format.

    Lines are formatted as:
        Key name padded with spaces :value

    Some keys (e.g., Comments) can appear multiple times.
    """
    result: dict[str, str | list[str]] = {}
    for line in text.splitlines():
        line = line.rstrip()
        if not line or line.startswith("#") or line.startswith(";"):
            continue
        # Split key from value at the first colon; the key is padded with
        # spaces (typically to ~30 characters) before the colon.
        match = re.match(r"^(.+?)\s*:(.*)$", line)
        if not match:
            continue
        key = match.group(1).strip().lower()
        value = match.group(2).strip()

        if key in result:
            existing = result[key]
            if isinstance(existing, list):
                existing.append(value)
            else:
                result[key] = [existing, value]
        else:
            result[key] = value

    return result


def _get_val(data: dict[str, str | list[str]], key: str, default: str = "") -> str:
    """Get a single string value from parsed MME data."""
    val = data.get(key, default)
    if isinstance(val, list):
        return val[0] if val else default
    return _novalue(str(val)) if val else default


def _get_list(data: dict[str, str | list[str]], key: str) -> list[str]:
    """Get the list of values for a repeated key."""
    val = data.get(key, [])
    if isinstance(val, list):
        return val
    return [val] if val else []
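The key:value grammar above can be exercised with a self-contained sketch that mirrors `_parse_mme_keyvalue`'s first-colon split and repeated-key-to-list rule (the sample keys below are illustrative, not taken from a real file):

```python
import re


def parse_keyvalue(text: str) -> dict:
    # Same splitting rule as _parse_mme_keyvalue: the first colon separates
    # the space-padded key from the value; repeated keys collect into lists.
    result: dict = {}
    for line in text.splitlines():
        m = re.match(r"^(.+?)\s*:(.*)$", line.rstrip())
        if not m:
            continue
        key, value = m.group(1).strip().lower(), m.group(2).strip()
        if key in result:
            prev = result[key]
            result[key] = prev + [value] if isinstance(prev, list) else [prev, value]
        else:
            result[key] = value
    return result


sample = """Data format edition number    :1.6
Laboratory name               :Example Lab
Comments                      :first comment
Comments                      :second comment"""
parsed = parse_keyvalue(sample)
```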


# ---------------------------------------------------------------------------
# .mme master file parser
# ---------------------------------------------------------------------------


def _parse_master_mme(path: Path) -> TestMetadata:
    """Parse the .mme master metadata file (ISO 13499 format)."""
    text = path.read_text(encoding="utf-8", errors="replace")
    data = _parse_mme_keyvalue(text)

    # Parse the test date
    test_date: date | None = None
    date_str = _get_val(data, "date of the test")
    if date_str:
        for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y", "%Y%m%d"):
            try:
                test_date = datetime.strptime(date_str, fmt).date()
                break
            except ValueError:
                continue

    # Vehicle info — from test object 1
    vehicle_name = _get_val(data, "name of test object 1")
    velocity_str = _get_val(data, "velocity test object 1")
    mass_str = _get_val(data, "mass test object 1")

    # Try to extract make/model from the vehicle name
    make, model = "", ""
    if vehicle_name:
        parts = vehicle_name.split(None, 1)
        if len(parts) >= 2:
            make, model = parts[0], parts[1]
        elif parts:
            make = parts[0]

    # Speed: the .mme stores it in m/s; convert to km/h
    speed_ms = _parse_float(velocity_str)
    speed_kmh = speed_ms * 3.6

    vehicle = VehicleInfo(
        make=make,
        model=model,
        mass_kg=_parse_float(mass_str),
    )

    # Impact config
    test_type = _get_val(data, "type of the test") or _get_val(data, "subtype of the test")
    regulation = _get_val(data, "regulation")
    offset = _get_val(data, ".offset 1")

    impact = ImpactConfig(
        test_type=test_type,
        speed_kmh=speed_kmh,
        overlap_percent=_parse_float(offset),
        standard=regulation,
    )

    # Dummy info — inferred from channel codes later; basic from .mme
    dummy = DummyInfo()

    # Test number
    test_number = (
        _get_val(data, "customer test ref. number")
        or _get_val(data, "laboratory test ref. number")
        or path.stem
    )

    # Comments
    comments = _get_list(data, "comments")
    description = " ".join(comments) if comments else ""

    extra: dict[str, Any] = {}
    for k, v in data.items():
        if isinstance(v, list):
            extra[k] = "; ".join(v)
        else:
            extra[k] = v

    return TestMetadata(
        test_number=test_number,
        test_date=test_date,
        test_facility=_get_val(data, "laboratory name"),
        description=description or _get_val(data, "title"),
        vehicle=vehicle,
        dummy=dummy,
        impact=impact,
        extra=extra,
    )


# ---------------------------------------------------------------------------
# .chn channel index parser
# ---------------------------------------------------------------------------


def _parse_channel_index(path: Path) -> list[tuple[int, str, str]]:
    """Parse the .chn channel index file.

    Returns a list of (channel_number, channel_code, channel_label).
    """
    text = path.read_text(encoding="utf-8", errors="replace")
    data = _parse_mme_keyvalue(text)
    channels: list[tuple[int, str, str]] = []

    # Parse "Name of channel NNN" entries
    for key, value in data.items():
        match = re.match(r"name of channel (\d+)", key)
        if match:
            num = int(match.group(1))
            val = str(value) if not isinstance(value, list) else value[0]
            # Value format: "11HEAD0000H3ACXP / HDCG" or just "11HEAD0000H3ACXP"
            parts = val.split("/", 1)
            code = parts[0].strip()
            label = parts[1].strip() if len(parts) > 1 else ""
            channels.append((num, code, label))

    channels.sort(key=lambda x: x[0])
    return channels


# ---------------------------------------------------------------------------
# .NNN individual channel data file parser
# ---------------------------------------------------------------------------


def _parse_channel_file(path: Path) -> tuple[dict[str, str], NDArray[np.float64]]:
    """Parse an individual channel data file (.001, .002, etc.).

    These files contain a header section (key:value) followed by
    numerical data (one value per line).

    Returns (header_dict, data_array).
    """
    text = path.read_text(encoding="utf-8", errors="replace")
    lines = text.splitlines()

    header: dict[str, str] = {}
    data_lines: list[str] = []
    in_data = False

    for line in lines:
        line = line.strip()
        if not line:
            continue

        if not in_data:
            # Try to parse as key:value
            match = re.match(r"^(.+?)\s*:(.*)$", line)
            if match:
                key = match.group(1).strip().lower()
                value = match.group(2).strip()
                header[key] = value
            else:
                # This might be the start of the data section
                try:
                    float(line)
                    in_data = True
                    data_lines.append(line)
                except ValueError:
                    # Skip unparseable lines
                    pass
        else:
            data_lines.append(line)

    # Parse data values
    values: list[float] = []
    for dl in data_lines:
        try:
            values.append(float(dl))
        except ValueError:
            pass

    data = np.array(values, dtype=np.float64) if values else np.array([], dtype=np.float64)

    return header, data
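The time base implied by a channel header can be reconstructed on its own: t[i] = i * dt + t_first, where a negative "time of first sample" encodes pre-trigger data. A minimal sketch with assumed header values (10 kHz, 30 ms pre-trigger):

```python
import numpy as np

# Reconstruct a channel's time vector from header fields, as the readers do.
dt = 1.0 / 10000.0   # "Sampling interval": 0.1 ms (10 kHz), assumed value
t_first = -0.030     # "Time of first sample": 30 ms of pre-trigger, assumed
n = 1500             # number of samples actually read

# t[i] = i * dt + t_first; t = 0 is the trigger instant.
time = np.arange(n, dtype=np.float64) * dt + t_first
```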


# ---------------------------------------------------------------------------
# INI-format master file parser (for our synthetic fixtures)
# ---------------------------------------------------------------------------


def _parse_master_ini(path: Path) -> TestMetadata:
    """Parse an INI-style master metadata file (synthetic fixture format)."""
    config = configparser.ConfigParser(interpolation=None)
    try:
        config.read(str(path), encoding="utf-8")
    except (configparser.Error, UnicodeDecodeError):
        try:
            config.read(str(path), encoding="latin-1")
        except configparser.Error:
            return TestMetadata(test_number=path.parent.name)

    def get_value(*keys: str) -> str:
        for section in config.sections():
            for key in keys:
                try:
                    return config.get(section, key)
                except (configparser.NoOptionError, configparser.NoSectionError):
                    continue
        return ""

    test_date: date | None = None
    date_str = get_value("test_date", "testdate", "date")
    if date_str:
        for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y"):
            try:
                test_date = datetime.strptime(date_str.strip(), fmt).date()
                break
            except ValueError:
                continue

    vehicle = VehicleInfo(
        make=get_value("vehicle_make", "vehiclemake", "make"),
        model=get_value("vehicle_model", "vehiclemodel", "model"),
        year=_parse_int(get_value("vehicle_year", "vehicleyear", "year")),
        vin=get_value("vin", "vehicle_vin"),
        mass_kg=_parse_float(get_value("vehicle_mass", "vehiclemass", "mass_kg")),
        vehicle_type=get_value("vehicle_type", "vehicletype"),
    )
    dummy = DummyInfo(
        dummy_type=get_value("dummy_type", "dummytype", "dummy"),
        serial=get_value("dummy_serial", "dummyserial"),
        position=get_value("dummy_position", "dummyposition", "position"),
        mass_kg=_parse_float(get_value("dummy_mass", "dummymass")),
    )
    impact = ImpactConfig(
        test_type=get_value("test_type", "testtype", "impact_type"),
        speed_kmh=_parse_float(get_value("test_speed", "testspeed", "impact_speed", "speed")),
        barrier_type=get_value("barrier_type", "barriertype", "barrier"),
        overlap_percent=_parse_float(get_value("overlap", "overlap_percent", "overlappercent")),
        standard=get_value("test_standard", "teststandard", "standard", "regulation"),
    )

    return TestMetadata(
        test_number=get_value("test_number", "testnumber", "test_id") or path.parent.name,
        test_date=test_date,
        test_facility=get_value("test_facility", "testfacility", "facility", "lab"),
        description=get_value("description", "test_description", "comment"),
        vehicle=vehicle,
        dummy=dummy,
        impact=impact,
    )


# ---------------------------------------------------------------------------
# INI-format channel reader (synthetic fixtures)
# ---------------------------------------------------------------------------


def _read_ini_channel(chn_path: Path, test_id: str) -> Channel | None:
    """Read a channel from separate .chn header + .dat data files (synthetic)."""
    config = configparser.ConfigParser(interpolation=None)
    try:
        config.read(str(chn_path), encoding="utf-8")
    except configparser.Error:
        return None

    # Extract fields from any section
    items: dict[str, str] = {}
    for section in config.sections():
        items.update(dict(config.items(section)))

    channel_code = items.get("channel_code", items.get("code", chn_path.stem))
    unit = _normalize_unit(items.get("unit", items.get("units", "")))
    sample_rate = _parse_float(items.get("sample_rate", items.get("samplerate", "0")))
    dt = _parse_float(items.get("dt", items.get("time_step", "0")))
    pre_trigger = _parse_int(items.get("pre_trigger", items.get("pretrigger", "0")))
    cfc_class_val = _parse_int(items.get("cfc", items.get("cfc_class", "-1")), -1)
    cfc_class = cfc_class_val if cfc_class_val > 0 else None

    if sample_rate == 0 and dt > 0:
        sample_rate = 1.0 / dt
    elif dt == 0 and sample_rate > 0:
        dt = 1.0 / sample_rate

    # Find the data file
    dat_path = chn_path.with_suffix(".dat")
    if not dat_path.exists():
        for ext in (".DAT", ".bin", ".BIN"):
            candidate = chn_path.with_suffix(ext)
            if candidate.exists():
                dat_path = candidate
                break
        else:
            return None

    data = np.loadtxt(str(dat_path), dtype=np.float64)
    if data.ndim > 1:
        data = data[:, -1]
    if len(data) == 0:
        return None

    num_samples = len(data)
    if dt <= 0:
        dt = 1.0 / 20000.0
        sample_rate = 20000.0

    t_start = -pre_trigger * dt
    time = np.arange(num_samples, dtype=np.float64) * dt + t_start

    code = ChannelCode.parse(channel_code)

    return Channel(
        name=channel_code,
        code=code,
        data=data,
        time=time,
        unit=unit or code.measurement_unit,
        sample_rate=sample_rate,
        cfc_class=cfc_class,
        metadata={},
        source_test_id=test_id,
    )

# ---------------------------------------------------------------------------
# MME Reader
# ---------------------------------------------------------------------------


class MMEReader:
    """Reader for the ISO 13499 MME format.

    Supports two variants:
    1. Real ISO 13499: .mme master + Channel/<TestID>.chn index + .NNN data files
    2. Simplified: MME.ini master + channels/<Code>.chn + <Code>.dat pairs
    """

    @property
    def format_name(self) -> str:
        return "ISO 13499 MME"

    def supports(self, path: Path) -> bool:
        path = Path(path)
        if not path.is_dir():
            return False
        # Check for the real ISO 13499 format: a *.mme master file
        if list(path.glob("*.mme")) or list(path.glob("*.MME")):
            return True
        # Check for the INI format: MME.ini
        for name in ("MME.ini", "mme.ini", "MME.INI"):
            if (path / name).exists():
                return True
        # Check for .chn files
        if list(path.rglob("*.chn")) or list(path.rglob("*.CHN")):
            return True
        return False

    def metadata(self, path: Path) -> TestMetadata:
        path = Path(path).resolve()
        master = self._find_master(path)
        if master is None:
            return TestMetadata(test_number=path.name)
        if master.suffix.lower() == ".mme":
            return _parse_master_mme(master)
        else:
            return _parse_master_ini(master)

    def read(self, path: Path) -> TestData:
        path = Path(path).resolve()
        metadata = self.metadata(path)
        test_id = metadata.test_number or path.name

        # Determine which format variant this is
        master = self._find_master(path)
        if master and master.suffix.lower() == ".mme":
            channels = self._read_iso_channels(path, test_id)
        else:
            channels = self._read_ini_channels(path, test_id)

        logger.info("Loaded %d channels from %s (%s)", len(channels), path.name, test_id)

        return TestData(
            test_id=test_id,
            metadata=metadata,
            channels=channels,
            path=path,
        )

    # ----- Format detection -----

    def _find_master(self, path: Path) -> Path | None:
        """Find the master metadata file."""
        # Real ISO format: *.mme
        for f in sorted(path.glob("*.mme")) + sorted(path.glob("*.MME")):
            if f.is_file():
                return f
        # INI format
        for name in ("MME.ini", "mme.ini", "MME.INI"):
            candidate = path / name
            if candidate.exists():
                return candidate
        return None

    def _find_channel_dir(self, path: Path) -> Path | None:
        """Find the Channel/ directory."""
        for name in ("Channel", "channel", "CHANNEL", "channels"):
            candidate = path / name
            if candidate.is_dir():
                return candidate
        return None

    # ----- ISO 13499 format reading -----

    def _read_iso_channels(self, path: Path, test_id: str) -> dict[str, Channel]:
        """Read channels in the real ISO 13499 format."""
        ch_dir = self._find_channel_dir(path)
        if ch_dir is None:
            logger.warning("No Channel directory found in %s", path)
            return {}

        # Find and parse the .chn index file
        chn_files = list(ch_dir.glob("*.chn")) + list(ch_dir.glob("*.CHN"))
        if not chn_files:
            logger.warning("No .chn index file found in %s", ch_dir)
            return {}

        chn_index = _parse_channel_index(chn_files[0])
        stem = chn_files[0].stem  # e.g., "3239" or "AK3T02FO"

        channels: dict[str, Channel] = {}
        for ch_num, ch_code, ch_label in chn_index:
            # Data file: <stem>.<NNN>
            ext = f".{ch_num:03d}"
            data_path = ch_dir / f"{stem}{ext}"
            if not data_path.exists():
                # Fall back to any file with the right numeric extension
                # (e.g., a differently cased stem)
                for candidate in ch_dir.glob(f"*{ext}"):
                    data_path = candidate
                    break
                else:
                    logger.debug("Data file not found for channel %d (%s)", ch_num, ch_code)
                    continue

            try:
                ch = self._read_iso_channel_file(data_path, ch_code, ch_label, test_id)
                if ch is not None:
                    channels[ch.name] = ch
            except Exception as e:
                logger.warning("Failed to read channel %s from %s: %s", ch_code, data_path, e)

        return channels

    def _read_iso_channel_file(
        self,
        path: Path,
        expected_code: str,
        label: str,
        test_id: str,
    ) -> Channel | None:
        """Read a single ISO 13499 channel data file (.001, .002, etc.)."""
        header, data = _parse_channel_file(path)

        if len(data) == 0:
            logger.debug("Empty data in %s", path)
            return None

        # Extract header fields
        channel_code = header.get("channel code", expected_code).strip()
        unit = _normalize_unit(header.get("unit", ""))
        dt = _parse_float(header.get("sampling interval", "0"))
        t_first = _parse_float(header.get("time of first sample", "0"))
        num_samples_declared = _parse_int(header.get("number of samples", "0"))
        cfc_str = header.get("channel frequency class", "")
        cfc_class: int | None = None
        if cfc_str and cfc_str.upper() != "NOVALUE":
            cfc_val = _parse_int(cfc_str, -1)
            if cfc_val > 0:
                cfc_class = cfc_val

        # Sample rate from the sampling interval
        sample_rate = 1.0 / dt if dt > 0 else 0.0

        # Build the time vector
        num_samples = len(data)
        if dt > 0:
            time = np.arange(num_samples, dtype=np.float64) * dt + t_first
        else:
            # Fallback: assume 10 kHz
            sample_rate = 10000.0
            dt = 1.0 / sample_rate
            time = np.arange(num_samples, dtype=np.float64) * dt

        # Parse the channel code
        code = ChannelCode.parse(channel_code)

        # Build metadata from the header
        ch_metadata: dict[str, Any] = {}
        for k, v in header.items():
            if v.upper() != "NOVALUE":
                ch_metadata[k] = v
        if label:
            ch_metadata["label"] = label

        return Channel(
            name=channel_code,
            code=code,
            data=data,
            time=time,
            unit=unit or code.measurement_unit,
            sample_rate=sample_rate,
            cfc_class=cfc_class,
            metadata=ch_metadata,
            source_test_id=test_id,
        )

    # ----- INI format reading (synthetic fixtures) -----

    def _read_ini_channels(self, path: Path, test_id: str) -> dict[str, Channel]:
        """Read channels from INI-style .chn/.dat file pairs."""
        ch_dir = self._find_channel_dir(path)
        if ch_dir is None:
            logger.warning("No channel directory found in %s", path)
            return {}

        chn_files = sorted(ch_dir.glob("*.chn")) + sorted(ch_dir.glob("*.CHN"))
        channels: dict[str, Channel] = {}

        for chn_path in chn_files:
            try:
                ch = _read_ini_channel(chn_path, test_id)
                if ch is not None:
                    channels[ch.name] = ch
            except Exception as e:
                logger.warning("Failed to read channel %s: %s", chn_path, e)

        return channels
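The `<stem>.<NNN>` data-file naming used by `_read_iso_channels` can be shown in isolation (the helper name is hypothetical; the stems are the examples from the code comment):

```python
# Data files in the ISO layout are named <index stem>.<NNN>, where NNN is the
# zero-padded channel number from the .chn index: channel 7 of index "3239"
# lives in "3239.007". data_file_name is an illustrative helper, not an
# impakt API.
def data_file_name(stem: str, ch_num: int) -> str:
    return f"{stem}.{ch_num:03d}"
```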
133
src/impakt/io/reader.py
Normal file
@@ -0,0 +1,133 @@
"""Reader protocol and registry for crash test data formats.

All readers implement the ReaderProtocol, enabling pluggable format support.
The ReaderRegistry auto-detects formats and dispatches to the correct reader.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import Protocol, runtime_checkable

from impakt.channel.model import TestData, TestMetadata

logger = logging.getLogger(__name__)


@runtime_checkable
class ReaderProtocol(Protocol):
    """Protocol for crash test data readers.

    Each reader handles one file format (MME, TDMS, CSV, etc.).
    """

    @property
    def format_name(self) -> str:
        """Human-readable format name (e.g., 'ISO 13499 MME')."""
        ...

    def supports(self, path: Path) -> bool:
        """Check if this reader can handle the given path.

        Args:
            path: Path to a file or directory.

        Returns:
            True if this reader can read the given path.
        """
        ...

    def metadata(self, path: Path) -> TestMetadata:
        """Read only the test metadata without loading channel data.

        Useful for browsing/cataloging tests without the memory cost.
        """
        ...

    def read(self, path: Path) -> TestData:
        """Read the full test data including all channels.

        Args:
            path: Path to the test data (file or directory).

        Returns:
            TestData with all channels populated.
        """
        ...


class ReaderRegistry:
    """Registry of available data readers.

    Supports auto-detection: given a path, tries each registered reader
    in priority order until one claims support.
    """

    def __init__(self) -> None:
        self._readers: list[ReaderProtocol] = []

    def register(self, reader: ReaderProtocol) -> None:
        """Register a reader. Later registrations have higher priority."""
        self._readers.append(reader)
        logger.info("Registered reader: %s", reader.format_name)

    def detect(self, path: Path) -> ReaderProtocol | None:
        """Detect the appropriate reader for a path.

        Tries readers in reverse registration order (latest first).

        Returns:
            The first reader that supports the path, or None.
        """
        resolved = Path(path).resolve()
        for reader in reversed(self._readers):
            if reader.supports(resolved):
                logger.debug("Detected format: %s for %s", reader.format_name, path)
                return reader
        return None

    def read(self, path: Path) -> TestData:
        """Read test data, auto-detecting the format.

        Raises:
            ValueError: If no registered reader supports the path.
        """
        reader = self.detect(path)
        if reader is None:
            raise ValueError(
                f"No reader found for path: {path}\n"
                f"Registered readers: {[r.format_name for r in self._readers]}"
            )
        return reader.read(Path(path).resolve())

    def metadata(self, path: Path) -> TestMetadata:
        """Read test metadata, auto-detecting the format."""
        reader = self.detect(path)
        if reader is None:
            raise ValueError(f"No reader found for path: {path}")
        return reader.metadata(Path(path).resolve())

    @property
    def readers(self) -> list[ReaderProtocol]:
        """List of registered readers."""
        return list(self._readers)


# Global registry instance
_global_registry = ReaderRegistry()


def get_registry() -> ReaderRegistry:
    """Get the global reader registry."""
    return _global_registry


def register_reader(reader: ReaderProtocol) -> None:
    """Register a reader in the global registry."""
    _global_registry.register(reader)


def read(path: str | Path) -> TestData:
    """Read test data from a path using the global registry."""
    return _global_registry.read(Path(path))
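A toy model of the registry's detection order, showing that the most recently registered reader wins. The reader classes and file extensions below are hypothetical, not impakt APIs:

```python
# Toy sketch of ReaderRegistry.detect: readers are tried in reverse
# registration order, so the latest registration has priority.
class CsvReader:
    format_name = "CSV"

    def supports(self, path: str) -> bool:
        return path.endswith(".csv")


class MmeReader:
    format_name = "MME"

    def supports(self, path: str) -> bool:
        return path.endswith(".mme")


class Registry:
    def __init__(self) -> None:
        self._readers = []

    def register(self, reader) -> None:
        self._readers.append(reader)

    def detect(self, path: str):
        # Latest registration wins, matching ReaderRegistry.detect.
        for reader in reversed(self._readers):
            if reader.supports(path):
                return reader
        return None


reg = Registry()
reg.register(CsvReader())
reg.register(MmeReader())
```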
27
src/impakt/plot/__init__.py
Normal file
@@ -0,0 +1,27 @@
"""Visualization engine built on Plotly."""

from impakt.plot.cursor import compute_cursor_values, cursor_values_to_dataframe
from impakt.plot.engine import PlotEngine, cursor_values
from impakt.plot.export import export_plot
from impakt.plot.spec import (
    ChannelRef,
    Corridor,
    CorridorStyle,
    CursorValues,
    PlotSpec,
    PlotStyle,
)

__all__ = [
    "ChannelRef",
    "Corridor",
    "CorridorStyle",
    "CursorValues",
    "PlotEngine",
    "PlotSpec",
    "PlotStyle",
    "compute_cursor_values",
    "cursor_values",
    "cursor_values_to_dataframe",
    "export_plot",
]
BIN
src/impakt/plot/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/__init__.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/cursor.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/cursor.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/engine.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/engine.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/export.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/export.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/spec.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/plot/__pycache__/spec.cpython-314.pyc
Normal file
Binary file not shown.
53
src/impakt/plot/cursor.py
Normal file
@@ -0,0 +1,53 @@
"""Dual X-axis cursor logic.

Provides interpolation and value readout at two user-selected time points
across all plotted channels.
"""

from __future__ import annotations

from typing import Any

import numpy as np
import pandas as pd

from impakt.channel.model import Channel
from impakt.plot.spec import CursorValues


def compute_cursor_values(
    channels: list[tuple[str, Channel]],
    x1: float,
    x2: float,
) -> CursorValues:
    """Compute interpolated values at two X positions for all channels.

    Args:
        channels: List of (label, channel) tuples.
        x1: First cursor time position.
        x2: Second cursor time position.

    Returns:
        CursorValues with per-channel interpolated values and deltas.
    """
    values: list[dict[str, Any]] = []
    for label, ch in channels:
        v1 = float(np.interp(x1, ch.time, ch.data))
        v2 = float(np.interp(x2, ch.time, ch.data))
        values.append(
            {
                "label": label,
                "value_at_x1": v1,
                "value_at_x2": v2,
                "delta": v2 - v1,
                "unit": ch.unit,
                "channel_name": ch.name,
            }
        )

    return CursorValues(x1=x1, x2=x2, values=values)


def cursor_values_to_dataframe(cv: CursorValues) -> pd.DataFrame:
    """Convert CursorValues to a pandas DataFrame."""
    return pd.DataFrame(cv.values)
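Per channel, the cursor readout reduces to `np.interp` at each cursor position. A self-contained sketch with toy ramp data (the sample values are illustrative):

```python
import numpy as np

# Linear interpolation at two cursor positions, as compute_cursor_values
# does per channel. Toy data: a ramp sampled every 1 ms, value = 100 * t.
time = np.linspace(0.0, 0.010, 11)   # 0 .. 10 ms
data = 100.0 * time

x1, x2 = 0.0025, 0.0075              # cursors fall between samples
v1 = float(np.interp(x1, time, data))
v2 = float(np.interp(x2, time, data))
delta = v2 - v1
```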
209
src/impakt/plot/engine.py
Normal file
@@ -0,0 +1,209 @@
"""Plotly-based plot engine.

Renders PlotSpec objects into interactive Plotly figures with
support for corridors, dual X-cursors, and export.
"""

from __future__ import annotations

from typing import Any

import numpy as np
import plotly.graph_objects as go

from impakt.channel.model import Channel
from impakt.plot.spec import ChannelRef, Corridor, CursorValues, PlotSpec, PlotStyle

# Default color palette (colorblind-friendly)
DEFAULT_COLORS = [
    "#1f77b4",  # blue
    "#ff7f0e",  # orange
    "#2ca02c",  # green
    "#d62728",  # red
    "#9467bd",  # purple
    "#8c564b",  # brown
    "#e377c2",  # pink
    "#7f7f7f",  # gray
    "#bcbd22",  # olive
    "#17becf",  # cyan
]


class PlotEngine:
    """Renders PlotSpec into Plotly figures."""

    def render(self, spec: PlotSpec) -> go.Figure:
        """Render a PlotSpec into an interactive Plotly figure."""
        fig = go.Figure()

        # Add corridor fills first (behind data traces)
        for corridor in spec.corridors:
            self._add_corridor(fig, corridor)

        # Add channel traces
        for i, ch_ref in enumerate(spec.channels):
            ch = ch_ref.channel
            if ch is None:
                continue

            # Apply transforms, if any
            if ch_ref.transform_chain:
                ch = ch_ref.transform_chain.apply(ch)

            style = ch_ref.style
            color = style.color or DEFAULT_COLORS[i % len(DEFAULT_COLORS)]
            label = style.label or ch_ref.label

            fig.add_trace(
                go.Scatter(
                    x=ch.time,
                    y=ch.data,
                    mode="lines",
                    name=label,
                    line=dict(
                        color=color,
                        width=style.line_width,
                        dash=style.line_dash,
                    ),
                    opacity=style.opacity,
                    hovertemplate=f"{label}<br>t=%{{x:.6f}}s<br>%{{y:.4f}} {ch.unit}<extra></extra>",
                )
            )

        # Add cursor lines
        if spec.x_cursors:
            x1, x2 = spec.x_cursors
            for x_val, label in [(x1, "x1"), (x2, "x2")]:
                fig.add_vline(
                    x=x_val,
                    line_dash="dash",
                    line_color="gray",
                    line_width=1,
                    annotation_text=f"{label}={x_val:.6f}s",
                    annotation_position="top",
                )

        # Layout
        fig.update_layout(
            title=spec.title,
            xaxis_title=spec.x_label,
            yaxis_title=spec.y_label,
            showlegend=spec.show_legend,
            height=spec.height,
            width=spec.width,
            template="plotly_white",
            hovermode="x unified",
            legend=dict(
                orientation="h",
                yanchor="bottom",
                y=-0.3,
                xanchor="center",
                x=0.5,
            ),
        )

        if spec.show_grid:
            fig.update_xaxes(showgrid=True, gridwidth=1, gridcolor="rgba(128,128,128,0.2)")
            fig.update_yaxes(showgrid=True, gridwidth=1, gridcolor="rgba(128,128,128,0.2)")

        if spec.x_range:
            fig.update_xaxes(range=list(spec.x_range))
        if spec.y_range:
            fig.update_yaxes(range=list(spec.y_range))

        return fig

    def _add_corridor(self, fig: go.Figure, corridor: Corridor) -> None:
        """Add a corridor (tolerance band) to the figure."""
        style = corridor.style

        # Upper bound
        fig.add_trace(
            go.Scatter(
                x=corridor.time,
                y=corridor.upper,
                mode="lines",
                name=f"{corridor.name} (upper)",
                line=dict(color=style.line_color, width=style.line_width, dash=style.line_dash),
                showlegend=False,
            )
        )

        # Lower bound with fill to upper
        fig.add_trace(
            go.Scatter(
                x=corridor.time,
                y=corridor.lower,
                mode="lines",
                name=f"{corridor.name} (lower)",
                line=dict(color=style.line_color, width=style.line_width, dash=style.line_dash),
                fill="tonexty",
                fillcolor=style.fill_color,
                showlegend=True,
            )
        )
|
||||
|
||||
def to_image(self, spec: PlotSpec, format: str = "png", scale: float = 2.0) -> bytes:
|
||||
"""Render to a static image.
|
||||
|
||||
Args:
|
||||
spec: Plot specification.
|
||||
format: Image format ('png', 'svg', 'pdf', 'jpeg').
|
||||
scale: Resolution multiplier.
|
||||
|
||||
Returns:
|
||||
Image bytes.
|
||||
"""
|
||||
fig = self.render(spec)
|
||||
return fig.to_image(format=format, scale=scale)
|
||||
|
||||
def to_html(self, spec: PlotSpec, include_plotlyjs: bool = True) -> str:
|
||||
"""Render to standalone HTML."""
|
||||
fig = self.render(spec)
|
||||
return fig.to_html(include_plotlyjs=include_plotlyjs)
|
||||
|
||||
|
||||
def cursor_values(
|
||||
spec_or_channels: PlotSpec | list[Channel],
|
||||
x1: float,
|
||||
x2: float,
|
||||
) -> CursorValues:
|
||||
"""Compute interpolated values at two X-axis positions.
|
||||
|
||||
Args:
|
||||
spec_or_channels: PlotSpec or list of Channels.
|
||||
x1: First cursor position (time).
|
||||
x2: Second cursor position (time).
|
||||
|
||||
Returns:
|
||||
CursorValues with interpolated values for each channel.
|
||||
"""
|
||||
channels: list[tuple[str, Channel]] = []
|
||||
|
||||
if isinstance(spec_or_channels, PlotSpec):
|
||||
for ref in spec_or_channels.channels:
|
||||
ch = ref.channel
|
||||
if ch is None:
|
||||
continue
|
||||
if ref.transform_chain:
|
||||
ch = ref.transform_chain.apply(ch)
|
||||
channels.append((ref.label, ch))
|
||||
else:
|
||||
for ch in spec_or_channels:
|
||||
channels.append((ch.code.short_label if ch.code.is_valid else ch.name, ch))
|
||||
|
||||
values = []
|
||||
for label, ch in channels:
|
||||
v1 = ch.value_at(x1)
|
||||
v2 = ch.value_at(x2)
|
||||
values.append(
|
||||
{
|
||||
"label": label,
|
||||
"value_at_x1": v1,
|
||||
"value_at_x2": v2,
|
||||
"delta": v2 - v1,
|
||||
"unit": ch.unit,
|
||||
}
|
||||
)
|
||||
|
||||
return CursorValues(x1=x1, x2=x2, values=values)
|
||||
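The interpolation behind `cursor_values` can be exercised standalone. A minimal sketch with NumPy, where `np.interp` stands in for `Channel.value_at` (an assumption; the real method may interpolate differently):

```python
import numpy as np


def cursor_delta(time, data, x1, x2):
    """Interpolate a trace at two cursor positions and return the delta."""
    v1 = float(np.interp(x1, time, data))
    v2 = float(np.interp(x2, time, data))
    return {"value_at_x1": v1, "value_at_x2": v2, "delta": v2 - v1}


t = np.linspace(0.0, 0.1, 11)  # 10 ms steps
y = 100.0 * t                  # linear ramp, so interpolation is exact
print(cursor_delta(t, y, 0.02, 0.05))  # delta of 3.0 on this ramp
```

On a linear ramp the two cursor values fall exactly on the line, which makes this a convenient sanity check for any cursor implementation.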
47
src/impakt/plot/export.py
Normal file
@@ -0,0 +1,47 @@
"""Plot export utilities."""

from __future__ import annotations

from pathlib import Path

from impakt.plot.engine import PlotEngine
from impakt.plot.spec import PlotSpec


def export_plot(
    spec: PlotSpec,
    path: str | Path,
    format: str | None = None,
    scale: float = 2.0,
) -> None:
    """Export a plot to a file.

    Args:
        spec: Plot specification.
        path: Output file path.
        format: Image format. Inferred from extension if not provided.
        scale: Resolution multiplier for raster formats.
    """
    path = Path(path)

    if format is None:
        ext = path.suffix.lower().lstrip(".")
        format_map = {
            "png": "png",
            "jpg": "jpeg",
            "jpeg": "jpeg",
            "svg": "svg",
            "pdf": "pdf",
            "html": "html",
        }
        format = format_map.get(ext, "png")

    engine = PlotEngine()

    if format == "html":
        html = engine.to_html(spec)
        path.write_text(html, encoding="utf-8")
    else:
        image_bytes = engine.to_image(spec, format=format, scale=scale)
        path.write_bytes(image_bytes)
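The extension-to-format inference in `export_plot` is a plain suffix lookup; a self-contained sketch of the same mapping:

```python
from pathlib import Path

# Same mapping as export_plot's format_map: extension -> export format
FORMAT_MAP = {"png": "png", "jpg": "jpeg", "jpeg": "jpeg",
              "svg": "svg", "pdf": "pdf", "html": "html"}


def infer_format(path: str, default: str = "png") -> str:
    """Map a file extension to an export format, falling back to a default."""
    ext = Path(path).suffix.lower().lstrip(".")
    return FORMAT_MAP.get(ext, default)


print(infer_format("report.JPG"))   # jpeg (case-insensitive)
print(infer_format("plot.webp"))    # png (unknown extension falls back)
```

Normalizing case and stripping the dot before the lookup is what makes `report.JPG` and `report.jpg` behave identically.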
170
src/impakt/plot/spec.py
Normal file
@@ -0,0 +1,170 @@
"""Plot specification models.

Defines the declarative structure for plots: what channels to show,
how to style them, corridors, cursor positions, etc.
"""

from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any

import numpy as np
from numpy.typing import NDArray

from impakt.channel.model import Channel
from impakt.transform.base import TransformChain


@dataclass
class PlotStyle:
    """Visual style for a single trace."""

    color: str = ""
    line_width: float = 1.5
    line_dash: str = "solid"  # solid, dash, dot, dashdot
    label: str = ""
    opacity: float = 1.0


@dataclass
class CorridorStyle:
    """Visual style for a tolerance corridor."""

    fill_color: str = "rgba(100, 100, 255, 0.15)"
    line_color: str = "rgba(100, 100, 255, 0.4)"
    line_width: float = 1.0
    line_dash: str = "dash"


@dataclass
class Corridor:
    """A tolerance corridor (min/max envelope).

    Can be loaded from file (CSV with time, lower, upper columns)
    or defined programmatically.
    """

    name: str
    upper: NDArray[np.floating[Any]]
    lower: NDArray[np.floating[Any]]
    time: NDArray[np.floating[Any]]
    style: CorridorStyle = field(default_factory=CorridorStyle)

    @classmethod
    def from_csv(cls, path: str, name: str = "") -> Corridor:
        """Load a corridor from a CSV file.

        Expected columns: time, lower, upper (or time, min, max).
        """
        data = np.loadtxt(path, delimiter=",", skiprows=1)
        return cls(
            name=name or path,
            time=data[:, 0],
            lower=data[:, 1],
            upper=data[:, 2],
        )

    @classmethod
    def from_bounds(
        cls,
        time: NDArray[np.floating[Any]],
        center: NDArray[np.floating[Any]],
        tolerance: float,
        name: str = "",
    ) -> Corridor:
        """Create a corridor as center +/- tolerance."""
        return cls(
            name=name,
            time=time,
            lower=center - tolerance,
            upper=center + tolerance,
        )


@dataclass
class ChannelRef:
    """Reference to a channel within a plot specification.

    Can refer to a channel by test_id + channel_name, or hold a
    direct Channel object.
    """

    test_id: str = ""
    channel_name: str = ""
    channel: Channel | None = None
    transform_chain: TransformChain | None = None
    style: PlotStyle = field(default_factory=PlotStyle)

    def resolve(self, channel: Channel | None = None) -> Channel:
        """Resolve to a concrete Channel, applying transforms if any."""
        ch = channel or self.channel
        if ch is None:
            raise ValueError(
                f"Cannot resolve ChannelRef: no channel provided "
                f"(test_id={self.test_id}, name={self.channel_name})"
            )
        if self.transform_chain:
            ch = self.transform_chain.apply(ch)
        return ch

    @property
    def label(self) -> str:
        """Display label for this channel reference."""
        if self.style.label:
            return self.style.label
        parts = []
        if self.test_id:
            parts.append(self.test_id)
        if self.channel_name:
            parts.append(self.channel_name)
        elif self.channel:
            parts.append(self.channel.code.short_label)
        return "/".join(parts) if parts else "Unknown"


@dataclass
class CursorValues:
    """Values at the two X-axis cursor positions."""

    x1: float
    x2: float
    values: list[dict[str, Any]] = field(default_factory=list)
    # Each dict: {label, value_at_x1, value_at_x2, delta, unit}

    def as_table(self) -> str:
        """Format as a text table."""
        if not self.values:
            return f"Cursors at x1={self.x1:.6f}, x2={self.x2:.6f} — no channels"

        lines = [
            f"{'Channel':<40} {'@ x1':>12} {'@ x2':>12} {'Delta':>12} {'Unit':<8}",
            "-" * 88,
        ]
        for v in self.values:
            lines.append(
                f"{v['label']:<40} {v['value_at_x1']:>12.4f} "
                f"{v['value_at_x2']:>12.4f} {v['delta']:>12.4f} {v.get('unit', ''):<8}"
            )
        return "\n".join(lines)


@dataclass
class PlotSpec:
    """Complete specification for a single plot.

    This is the declarative description that PlotEngine renders.
    """

    channels: list[ChannelRef] = field(default_factory=list)
    corridors: list[Corridor] = field(default_factory=list)
    x_cursors: tuple[float, float] | None = None
    x_range: tuple[float, float] | None = None
    y_range: tuple[float, float] | None = None
    title: str = ""
    x_label: str = "Time (s)"
    y_label: str = ""
    show_legend: bool = True
    show_grid: bool = True
    height: int = 500
    width: int = 900
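`Corridor.from_bounds` is elementwise arithmetic on the center trace. A standalone sketch of the same envelope construction (toy signal, not real test data):

```python
import numpy as np


def corridor_bounds(center: np.ndarray, tolerance: float):
    """Build a +/- tolerance envelope around a center trace."""
    return center - tolerance, center + tolerance


t = np.linspace(0.0, 0.05, 6)
center = np.sin(2 * np.pi * 10 * t)      # toy 10 Hz reference trace
lower, upper = corridor_bounds(center, 0.2)
print(upper - lower)  # constant band width of ~0.4 everywhere
```

Because the tolerance is a scalar, the band width is constant; a time-varying corridor would pass an array of per-sample tolerances instead, which broadcasting handles the same way.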
1
src/impakt/plugin/__init__.py
Normal file
@@ -0,0 +1 @@
"""Plugin system for extensibility."""
175
src/impakt/plugin/registry.py
Normal file
@@ -0,0 +1,175 @@
"""Plugin registry and discovery.

Plugins can extend Impakt with custom readers, transforms, injury criteria,
protocol scorers, and report templates.
"""

from __future__ import annotations

import importlib.util
import logging
import sys
from pathlib import Path
from typing import Any, Protocol, runtime_checkable

logger = logging.getLogger(__name__)


@runtime_checkable
class ImpaktPlugin(Protocol):
    """Protocol for Impakt plugins."""

    @property
    def name(self) -> str: ...

    @property
    def version(self) -> str: ...

    def register(self, registry: PluginRegistry) -> None:
        """Register plugin components with the registry."""
        ...


class PluginRegistry:
    """Central registry for all plugin-provided extensions."""

    def __init__(self) -> None:
        self._readers: list[Any] = []
        self._transforms: list[Any] = []
        self._criteria: list[Any] = []
        self._protocols: list[Any] = []
        self._report_templates: list[Any] = []
        self._plugins: list[ImpaktPlugin] = []

    def register_reader(self, reader: Any) -> None:
        """Register a data reader."""
        self._readers.append(reader)
        logger.info("Plugin reader registered: %s", getattr(reader, "format_name", reader))

    def register_transform(self, name: str, transform_cls: type) -> None:
        """Register a transform type."""
        self._transforms.append((name, transform_cls))
        # Also register with the transform registry
        from impakt.transform.base import _registry

        _registry.register(name, transform_cls)
        logger.info("Plugin transform registered: %s", name)

    def register_criterion(self, criterion: Any) -> None:
        """Register an injury criterion."""
        self._criteria.append(criterion)
        logger.info("Plugin criterion registered: %s", getattr(criterion, "name", criterion))

    def register_protocol(self, protocol: Any) -> None:
        """Register a rating protocol scorer."""
        self._protocols.append(protocol)
        logger.info("Plugin protocol registered: %s", getattr(protocol, "protocol_name", protocol))

    def register_report_template(self, name: str, template: Any) -> None:
        """Register a report template."""
        self._report_templates.append((name, template))
        logger.info("Plugin report template registered: %s", name)

    @property
    def readers(self) -> list[Any]:
        return list(self._readers)

    @property
    def transforms(self) -> list[tuple[str, type]]:
        return list(self._transforms)

    @property
    def criteria(self) -> list[Any]:
        return list(self._criteria)

    @property
    def protocols(self) -> list[Any]:
        return list(self._protocols)

    @property
    def report_templates(self) -> list[tuple[str, Any]]:
        return list(self._report_templates)

    @property
    def plugins(self) -> list[ImpaktPlugin]:
        return list(self._plugins)

    def register_plugin(self, plugin: ImpaktPlugin) -> None:
        """Register a complete plugin."""
        self._plugins.append(plugin)
        plugin.register(self)
        logger.info("Plugin registered: %s v%s", plugin.name, plugin.version)


# Global registry
_global_plugin_registry = PluginRegistry()


def get_plugin_registry() -> PluginRegistry:
    """Get the global plugin registry."""
    return _global_plugin_registry


def discover_entry_points() -> None:
    """Discover plugins via Python entry points.

    Looks for entry points in the ``impakt.plugins`` group.
    """
    from importlib.metadata import entry_points

    if sys.version_info >= (3, 10):
        # Selectable entry points (Python 3.10+)
        eps = entry_points(group="impakt.plugins")
    else:
        all_eps = entry_points()
        eps = all_eps.get("impakt.plugins", [])  # type: ignore[assignment]

    for ep in eps:
        try:
            plugin_cls = ep.load()
            plugin = plugin_cls()
            _global_plugin_registry.register_plugin(plugin)
        except Exception as e:
            logger.warning("Failed to load plugin %s: %s", ep.name, e)


def discover_directory(path: Path | None = None) -> None:
    """Discover plugins from a directory.

    Looks for Python files in ``~/.impakt/plugins/`` (default)
    that define an ``ImpaktPlugin`` class.
    """
    if path is None:
        path = Path.home() / ".impakt" / "plugins"

    if not path.exists():
        return

    for py_file in sorted(path.glob("*.py")):
        try:
            spec = importlib.util.spec_from_file_location(f"impakt_plugin_{py_file.stem}", py_file)
            if spec and spec.loader:
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)

                # Look for ImpaktPlugin implementations
                for attr_name in dir(module):
                    attr = getattr(module, attr_name)
                    if (
                        isinstance(attr, type)
                        and attr is not ImpaktPlugin
                        and hasattr(attr, "register")
                        and hasattr(attr, "name")
                        and hasattr(attr, "version")
                    ):
                        plugin = attr()
                        _global_plugin_registry.register_plugin(plugin)
        except Exception as e:
            logger.warning("Failed to load plugin from %s: %s", py_file, e)


def discover_all() -> None:
    """Run all plugin discovery mechanisms."""
    discover_entry_points()
    discover_directory()
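A minimal plugin against a toy stand-in for `PluginRegistry` (the stand-in and the sample criterion dict are illustrative, not the package's real API surface; only `name`, `version`, and `register()` mirror the `ImpaktPlugin` protocol):

```python
class ToyRegistry:
    """Stand-in registry that just collects what plugins hand it."""

    def __init__(self):
        self.criteria = []

    def register_criterion(self, criterion):
        self.criteria.append(criterion)


class MyCriterionPlugin:
    """Structurally satisfies ImpaktPlugin: name, version, register()."""

    name = "my-criterion"
    version = "0.1.0"

    def register(self, registry):
        # Hypothetical payload; a real plugin would register a criterion object.
        registry.register_criterion({"name": "BrIC", "limit": 1.0})


reg = ToyRegistry()
MyCriterionPlugin().register(reg)
print(reg.criteria)
```

Because `ImpaktPlugin` is a `runtime_checkable` `Protocol`, any class with these three members passes `isinstance(..., ImpaktPlugin)` without subclassing anything.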
21
src/impakt/protocol/__init__.py
Normal file
@@ -0,0 +1,21 @@
"""Rating protocol scorers (Euro NCAP, US NCAP, IIHS)."""

from impakt.protocol import euro_ncap, iihs, us_ncap
from impakt.protocol.base import (
    BodyRegionScore,
    Color,
    ProtocolResult,
    ProtocolScorer,
    Rating,
)

__all__ = [
    "BodyRegionScore",
    "Color",
    "ProtocolResult",
    "ProtocolScorer",
    "Rating",
    "euro_ncap",
    "iihs",
    "us_ncap",
]
BIN
src/impakt/protocol/__pycache__/__init__.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/__init__.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/base.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/base.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/euro_ncap.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/euro_ncap.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/iihs.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/iihs.cpython-314.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/us_ncap.cpython-312.pyc
Normal file
Binary file not shown.
BIN
src/impakt/protocol/__pycache__/us_ncap.cpython-314.pyc
Normal file
Binary file not shown.
112
src/impakt/protocol/base.py
Normal file
@@ -0,0 +1,112 @@
"""Base types for rating protocol scorers."""

from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Protocol, runtime_checkable

from impakt.criteria.base import CriterionResult


class Rating(Enum):
    """Universal rating levels."""

    GOOD = "good"
    ACCEPTABLE = "acceptable"
    MARGINAL = "marginal"
    POOR = "poor"


class Color(Enum):
    """Euro NCAP body region color codes."""

    GREEN = "green"
    YELLOW = "yellow"
    ORANGE = "orange"
    BROWN = "brown"
    RED = "red"


@dataclass(frozen=True)
class BodyRegionScore:
    """Score for a single body region within a protocol evaluation."""

    region: str
    criterion: str
    value: float
    unit: str = ""
    rating: Rating | None = None
    color: Color | None = None
    points: float = 0.0
    max_points: float = 0.0
    details: dict[str, Any] = field(default_factory=dict)


@dataclass(frozen=True)
class ProtocolResult:
    """Complete result of a protocol evaluation.

    Contains the overall rating/score plus per-region breakdowns.
    """

    protocol: str
    version: str
    overall_rating: str = ""
    stars: int | None = None
    total_points: float = 0.0
    max_points: float = 0.0
    percentage: float = 0.0
    region_scores: list[BodyRegionScore] = field(default_factory=list)
    details: dict[str, Any] = field(default_factory=dict)

    def to_pdf(self, path: str) -> None:
        """Generate a protocol report PDF."""
        from impakt.report.engine import generate_protocol_report

        generate_protocol_report(self, path)

    def summary(self) -> str:
        """Human-readable summary of the result."""
        lines = [f"{self.protocol} {self.version}"]
        if self.stars is not None:
            lines.append(f"  Rating: {'*' * self.stars} ({self.stars}/5 stars)")
        if self.overall_rating:
            lines.append(f"  Overall: {self.overall_rating}")
        if self.max_points > 0:
            lines.append(
                f"  Score: {self.total_points:.1f}/{self.max_points:.1f} ({self.percentage:.0f}%)"
            )
        lines.append("")
        for rs in self.region_scores:
            rating_str = ""
            if rs.color is not None:
                rating_str = f" [{rs.color.value}]"
            elif rs.rating is not None:
                rating_str = f" [{rs.rating.value}]"
            points_str = f" ({rs.points:.1f}/{rs.max_points:.1f})" if rs.max_points > 0 else ""
            lines.append(f"  {rs.region}: {rs.value:.2f} {rs.unit}{rating_str}{points_str}")
        return "\n".join(lines)

    def __repr__(self) -> str:
        if self.stars is not None:
            return f"ProtocolResult({self.protocol} {self.version}: {self.stars} stars)"
        return f"ProtocolResult({self.protocol} {self.version}: {self.overall_rating})"


@runtime_checkable
class ProtocolScorer(Protocol):
    """Protocol for rating protocol scorers."""

    @property
    def protocol_name(self) -> str: ...

    @property
    def version(self) -> str: ...

    def evaluate(
        self,
        criteria: dict[str, CriterionResult],
    ) -> ProtocolResult:
        """Evaluate criteria results against this protocol."""
        ...
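The `summary()` string assembly can be exercised with a cut-down stand-in of `ProtocolResult` (hypothetical numbers; only the formatting is the point):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class MiniResult:
    """Trimmed stand-in: just the star and score lines of summary()."""

    protocol: str
    version: str
    stars: int | None = None
    total_points: float = 0.0
    max_points: float = 0.0

    def summary(self) -> str:
        lines = [f"{self.protocol} {self.version}"]
        if self.stars is not None:
            lines.append(f"  Rating: {'*' * self.stars} ({self.stars}/5 stars)")
        if self.max_points > 0:
            pct = self.total_points / self.max_points * 100.0
            lines.append(f"  Score: {self.total_points:.1f}/{self.max_points:.1f} ({pct:.0f}%)")
        return "\n".join(lines)


print(MiniResult("Euro NCAP", "2024", stars=4, total_points=14.0, max_points=20.0).summary())
```

The frozen dataclass keeps results immutable once computed, which is why the real class recomputes nothing in `summary()` and only formats stored fields.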
193
src/impakt/protocol/euro_ncap.py
Normal file
@@ -0,0 +1,193 @@
"""Euro NCAP scoring engine.

Implements the Euro NCAP adult occupant frontal impact scoring methodology.
Criteria are mapped to body-region color codes (Green through Red) using
per-color performance limits. Points are then derived from colors.

Threshold values are versioned — this module supports multiple protocol years.
"""

from __future__ import annotations

from impakt.criteria.base import CriterionResult
from impakt.protocol.base import BodyRegionScore, Color, ProtocolResult


def _color_from_value(
    value: float,
    higher_is_worse: bool,
    green_limit: float,
    yellow_limit: float,
    orange_limit: float,
    brown_limit: float,
    red_limit: float,
) -> Color:
    """Map a criterion value to a Euro NCAP color code.

    Bands the value stepwise against the color limits; anything beyond
    the brown limit is Red (``red_limit`` is retained for reference).
    """
    if higher_is_worse:
        if value <= green_limit:
            return Color.GREEN
        elif value <= yellow_limit:
            return Color.YELLOW
        elif value <= orange_limit:
            return Color.ORANGE
        elif value <= brown_limit:
            return Color.BROWN
        else:
            return Color.RED
    else:
        if value >= green_limit:
            return Color.GREEN
        elif value >= yellow_limit:
            return Color.YELLOW
        elif value >= orange_limit:
            return Color.ORANGE
        elif value >= brown_limit:
            return Color.BROWN
        else:
            return Color.RED


def _points_from_color(color: Color, max_points: float) -> float:
    """Map a color to points (linear scale)."""
    color_fractions = {
        Color.GREEN: 1.0,
        Color.YELLOW: 0.75,
        Color.ORANGE: 0.5,
        Color.BROWN: 0.25,
        Color.RED: 0.0,
    }
    return max_points * color_fractions[color]


# Threshold sets by year
# Format: {criterion: (green, yellow, orange, brown, red, higher_is_worse, max_points)}
THRESHOLDS_2024: dict[str, tuple[float, float, float, float, float, bool, float]] = {
    "HIC15": (500.0, 620.0, 700.0, 850.0, 1000.0, True, 4.0),
    "3ms Clip": (42.0, 48.0, 54.0, 57.0, 60.0, True, 4.0),
    "Chest Deflection": (22.0, 34.0, 42.0, 50.0, 63.0, True, 4.0),
    "Nij": (0.5, 0.65, 0.8, 0.9, 1.0, True, 2.0),
    "Femur Load Left": (3.8, 5.4, 7.0, 8.5, 10.0, True, 2.0),
    "Femur Load Right": (3.8, 5.4, 7.0, 8.5, 10.0, True, 2.0),
    "Tibia Index": (0.4, 0.7, 1.0, 1.15, 1.3, True, 2.0),
    "Viscous Criterion": (0.32, 0.56, 0.8, 0.9, 1.0, True, 2.0),
}

THRESHOLDS: dict[str, dict[str, tuple[float, float, float, float, float, bool, float]]] = {
    "2024": THRESHOLDS_2024,
}


class EuroNCAP:
    """Euro NCAP scorer."""

    def __init__(self, version: str = "2024") -> None:
        self._version = version
        if version not in THRESHOLDS:
            raise ValueError(
                f"Unknown Euro NCAP version: {version}. Available: {list(THRESHOLDS.keys())}"
            )
        self._thresholds = THRESHOLDS[version]

    @property
    def protocol_name(self) -> str:
        return "Euro NCAP"

    @property
    def version(self) -> str:
        return self._version

    def evaluate(
        self,
        criteria: dict[str, CriterionResult],
    ) -> ProtocolResult:
        """Score criteria results against Euro NCAP thresholds."""
        region_scores: list[BodyRegionScore] = []
        total_points = 0.0
        max_points = 0.0

        for criterion_name, thresholds in self._thresholds.items():
            green, yellow, orange, brown, red, higher_is_worse, max_pts = thresholds

            # Find the matching criterion result
            result = criteria.get(criterion_name)
            if result is None:
                # Try partial match
                for key, res in criteria.items():
                    if (
                        criterion_name.lower() in key.lower()
                        or key.lower() in criterion_name.lower()
                    ):
                        result = res
                        break

            if result is None:
                continue

            color = _color_from_value(
                result.value,
                higher_is_worse,
                green,
                yellow,
                orange,
                brown,
                red,
            )
            points = _points_from_color(color, max_pts)

            region_scores.append(
                BodyRegionScore(
                    region=result.body_region or criterion_name,
                    criterion=criterion_name,
                    value=result.value,
                    unit=result.unit,
                    color=color,
                    points=points,
                    max_points=max_pts,
                )
            )

            total_points += points
            max_points += max_pts

        percentage = (total_points / max_points * 100.0) if max_points > 0 else 0.0
        stars = self._percentage_to_stars(percentage)

        return ProtocolResult(
            protocol=self.protocol_name,
            version=self._version,
            overall_rating=f"{stars} stars",
            stars=stars,
            total_points=total_points,
            max_points=max_points,
            percentage=percentage,
            region_scores=region_scores,
        )

    @staticmethod
    def _percentage_to_stars(pct: float) -> int:
        """Convert percentage to star rating."""
        if pct >= 80:
            return 5
        elif pct >= 70:
            return 4
        elif pct >= 60:
            return 3
        elif pct >= 40:
            return 2
        elif pct >= 20:
            return 1
        else:
            return 0


def evaluate(
    criteria: dict[str, CriterionResult],
    version: str = "2024",
) -> ProtocolResult:
    """Convenience function for Euro NCAP evaluation."""
    return EuroNCAP(version=version).evaluate(criteria)
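The color banding plus point mapping in the scorer reduces to two lookups. A self-contained sketch using the 2024 HIC15 row above (caps 500/620/700/850, 4.0 max points, higher is worse):

```python
HIC15_LIMITS = (500.0, 620.0, 700.0, 850.0)  # green/yellow/orange/brown caps
COLORS = ("green", "yellow", "orange", "brown")
POINT_FRACTION = {"green": 1.0, "yellow": 0.75, "orange": 0.5,
                  "brown": 0.25, "red": 0.0}


def band(value: float, limits=HIC15_LIMITS) -> str:
    """First band whose cap the (higher-is-worse) value does not exceed."""
    for color, cap in zip(COLORS, limits):
        if value <= cap:
            return color
    return "red"  # beyond the brown cap


def points(value: float, max_points: float = 4.0) -> float:
    """Color fraction times the criterion's maximum points."""
    return max_points * POINT_FRACTION[band(value)]


print(band(480.0), points(480.0))    # green 4.0
print(band(650.0), points(650.0))    # orange 2.0
print(band(1200.0), points(1200.0))  # red 0.0
```

The lower-is-worse branch of `_color_from_value` is the same walk with the comparisons flipped.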
151
src/impakt/protocol/iihs.py
Normal file
@@ -0,0 +1,151 @@
"""IIHS rating engine.

IIHS rates individual body regions as Good/Acceptable/Marginal/Poor.
The overall rating is determined by the worst sub-rating.
"""

from __future__ import annotations

from impakt.criteria.base import CriterionResult
from impakt.protocol.base import BodyRegionScore, ProtocolResult, Rating


# Thresholds: (good_limit, acceptable_limit, marginal_limit, higher_is_worse)
# Values beyond marginal_limit = Poor
# higher_is_worse indicates that higher values are worse
IIHS_THRESHOLDS_2024: dict[str, tuple[float, float, float, bool]] = {
    "HIC15": (250.0, 500.0, 700.0, True),
    "Chest Deflection": (38.0, 50.0, 63.0, True),  # mm
    "Femur Load Left": (3.8, 6.2, 10.0, True),  # kN
    "Femur Load Right": (3.8, 6.2, 10.0, True),  # kN
    "Nij": (0.52, 0.78, 1.0, True),
    "Tibia Index": (0.5, 0.8, 1.3, True),
}

IIHS_THRESHOLDS: dict[str, dict[str, tuple[float, float, float, bool]]] = {
    "2024": IIHS_THRESHOLDS_2024,
}


def _rate_value(
    value: float,
    good_limit: float,
    acceptable_limit: float,
    marginal_limit: float,
    higher_is_worse: bool,
) -> Rating:
    """Rate a single value against IIHS thresholds."""
    if higher_is_worse:
        if value <= good_limit:
            return Rating.GOOD
        elif value <= acceptable_limit:
            return Rating.ACCEPTABLE
        elif value <= marginal_limit:
            return Rating.MARGINAL
        else:
            return Rating.POOR
    else:
        if value >= good_limit:
            return Rating.GOOD
        elif value >= acceptable_limit:
            return Rating.ACCEPTABLE
        elif value >= marginal_limit:
            return Rating.MARGINAL
        else:
            return Rating.POOR


RATING_ORDER = {
    Rating.GOOD: 0,
    Rating.ACCEPTABLE: 1,
    Rating.MARGINAL: 2,
    Rating.POOR: 3,
}


class IIHS:
    """IIHS evaluator."""

    def __init__(self, version: str = "2024") -> None:
        self._version = version
        if version not in IIHS_THRESHOLDS:
            raise ValueError(
                f"Unknown IIHS version: {version}. Available: {list(IIHS_THRESHOLDS.keys())}"
            )
        self._thresholds = IIHS_THRESHOLDS[version]

    @property
    def protocol_name(self) -> str:
        return "IIHS"

    @property
    def version(self) -> str:
        return self._version

    def evaluate(
        self,
        criteria: dict[str, CriterionResult],
    ) -> ProtocolResult:
        """Rate criteria against IIHS thresholds."""
        region_scores: list[BodyRegionScore] = []
        worst_rating = Rating.GOOD

        for criterion_name, (
            good,
            acceptable,
            marginal,
            higher_is_worse,
        ) in self._thresholds.items():
            result = criteria.get(criterion_name)
            if result is None:
                for key, res in criteria.items():
                    if (
                        criterion_name.lower() in key.lower()
                        or key.lower() in criterion_name.lower()
                    ):
                        result = res
                        break

            if result is None:
                continue

            rating = _rate_value(
                result.value,
                good,
                acceptable,
                marginal,
                higher_is_worse,
            )

            if RATING_ORDER[rating] > RATING_ORDER[worst_rating]:
                worst_rating = rating

            region_scores.append(
                BodyRegionScore(
                    region=result.body_region or criterion_name,
                    criterion=criterion_name,
                    value=result.value,
                    unit=result.unit,
                    rating=rating,
                )
            )

        return ProtocolResult(
            protocol=self.protocol_name,
            version=self._version,
            overall_rating=worst_rating.value.upper(),
            region_scores=region_scores,
            details={
                "worst_region": worst_rating.value,
            },
        )


def evaluate(
    criteria: dict[str, CriterionResult],
    version: str = "2024",
) -> ProtocolResult:
    """Convenience function for IIHS evaluation."""
    return IIHS(version=version).evaluate(criteria)
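The worst-sub-rating aggregation can be sketched standalone, with plain strings in place of the `Rating` enum:

```python
# Severity ordering, mirroring RATING_ORDER in the IIHS module
RATING_ORDER = {"good": 0, "acceptable": 1, "marginal": 2, "poor": 3}


def overall(ratings: list[str]) -> str:
    """The worst sub-rating (highest severity index) wins."""
    return max(ratings, key=RATING_ORDER.__getitem__)


print(overall(["good", "acceptable", "good"]))      # acceptable
print(overall(["good", "marginal", "acceptable"]))  # marginal
```

Using `max` over a severity index is equivalent to the module's running `worst_rating` comparison, just expressed as a single reduction.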
140
src/impakt/protocol/us_ncap.py
Normal file
@@ -0,0 +1,140 @@
"""US NCAP (NHTSA) scoring engine.

NHTSA 5-star system is based on probability of serious injury (AIS 3+).
Individual injury risk functions convert measured criteria values to
injury probability, which are then combined.
"""

from __future__ import annotations

import math

from impakt.criteria.base import CriterionResult
from impakt.protocol.base import BodyRegionScore, ProtocolResult, Rating


def _logistic_risk(value: float, beta0: float, beta1: float) -> float:
    """Logistic injury risk function.

    P(AIS 3+) = 1 / (1 + exp(-(beta0 + beta1 * value)))
    """
    z = beta0 + beta1 * value
    # Clamp to avoid overflow
    z = max(-500, min(500, z))
    return 1.0 / (1.0 + math.exp(-z))


# Risk function coefficients (beta0, beta1) for H3 50th
# These are approximate — actual NHTSA values are published in FR notices
RISK_COEFFICIENTS: dict[str, tuple[float, float]] = {
    "HIC15": (-7.45231, 0.00690),  # Head
    "3ms Clip": (-8.7635, 0.1058),  # Chest acceleration
    "Chest Deflection": (-5.7606, 0.0861),  # Chest deflection (mm)
    "Nij": (-3.227, 2.885),  # Neck
    "Femur Load": (-5.0952, 0.4897),  # Femur (kN)
}


def _stars_from_probability(p: float) -> int:
    """Convert probability to star rating."""
    if p <= 0.10:
        return 5
    elif p <= 0.20:
        return 4
    elif p <= 0.35:
        return 3
    elif p <= 0.45:
        return 2
    else:
        return 1


class USNCAP:
    """US NCAP (NHTSA) scorer."""

    def __init__(self, version: str = "2023") -> None:
        self._version = version

    @property
    def protocol_name(self) -> str:
        return "US NCAP"

    @property
    def version(self) -> str:
        return self._version

    def evaluate(
        self,
        criteria: dict[str, CriterionResult],
    ) -> ProtocolResult:
        """Score criteria using NHTSA injury risk functions."""
        region_scores: list[BodyRegionScore] = []
        probabilities: list[float] = []

        for criterion_name, (beta0, beta1) in RISK_COEFFICIENTS.items():
            result = criteria.get(criterion_name)
            if result is None:
                for key, res in criteria.items():
                    if criterion_name.lower() in key.lower():
                        result = res
                        break

            if result is None:
                continue

            p = _logistic_risk(result.value, beta0, beta1)
            probabilities.append(p)

            # Determine rating based on individual probability
            if p <= 0.10:
                rating = Rating.GOOD
            elif p <= 0.20:
                rating = Rating.ACCEPTABLE
            elif p <= 0.35:
                rating = Rating.MARGINAL
            else:
                rating = Rating.POOR

            region_scores.append(
                BodyRegionScore(
                    region=result.body_region or criterion_name,
                    criterion=criterion_name,
                    value=result.value,
                    unit=result.unit,
                    rating=rating,
                    details={"injury_probability": p},
                )
            )

        # Combined probability (simplified — assume independence)
        # P(any injury) = 1 - product(1 - Pi)
        if probabilities:
            combined_p = 1.0 - math.prod(1.0 - p for p in probabilities)
        else:
            combined_p = 0.0

        stars = _stars_from_probability(combined_p)

        return ProtocolResult(
            protocol=self.protocol_name,
            version=self._version,
            overall_rating=f"{stars} stars",
            stars=stars,
            percentage=(1.0 - combined_p) * 100.0,
            region_scores=region_scores,
            details={
                "combined_injury_probability": combined_p,
                "individual_probabilities": {
                    rs.criterion: rs.details.get("injury_probability", 0)
                    for rs in region_scores
                },
            },
        )


def evaluate(
    criteria: dict[str, CriterionResult],
    version: str = "2023",
) -> ProtocolResult:
    """Convenience function for US NCAP evaluation."""
    return USNCAP(version=version).evaluate(criteria)
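The risk model above is plain arithmetic and easy to sanity-check in isolation. This self-contained rerun uses the HIC15 and 3 ms clip coefficients copied from `RISK_COEFFICIENTS` above; the input values (HIC15 = 700, clip = 45 g) are illustrative, not from any real test:

```python
import math

def logistic_risk(value: float, beta0: float, beta1: float) -> float:
    # P(AIS 3+) = 1 / (1 + exp(-(beta0 + beta1 * value))), clamped as in _logistic_risk
    z = max(-500, min(500, beta0 + beta1 * value))
    return 1.0 / (1.0 + math.exp(-z))

p_head = logistic_risk(700.0, -7.45231, 0.00690)   # HIC15 = 700
p_chest = logistic_risk(45.0, -8.7635, 0.1058)     # 3 ms clip = 45 g

# Independence assumption, as in the evaluate() above:
# P(any AIS 3+) = 1 - prod(1 - Pi)
combined = 1.0 - (1.0 - p_head) * (1.0 - p_chest)
print(f"{p_head:.3f} {p_chest:.3f} {combined:.3f}")  # 0.068 0.018 0.084
```

At 8.4% combined probability this hypothetical occupant would land just inside the 5-star band (p <= 0.10).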
1 src/impakt/report/__init__.py Normal file
@@ -0,0 +1 @@
"""PDF and report generation."""
240 src/impakt/report/engine.py Normal file
@@ -0,0 +1,240 @@
"""Report generation engine.

Produces PDF and HTML reports from plot specifications and protocol results.
Uses Jinja2 for HTML templating and WeasyPrint for PDF rendering.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import Any

from impakt.channel.model import TestMetadata
from impakt.criteria.base import CriterionResult
from impakt.plot.engine import PlotEngine
from impakt.plot.spec import PlotSpec
from impakt.protocol.base import ProtocolResult

logger = logging.getLogger(__name__)

TEMPLATE_DIR = Path(__file__).parent / "templates"


def _get_jinja_env() -> Any:
    """Create a Jinja2 environment with the template directory."""
    from jinja2 import Environment, FileSystemLoader, select_autoescape

    return Environment(
        loader=FileSystemLoader(str(TEMPLATE_DIR)),
        autoescape=select_autoescape(["html"]),
    )


def generate_plot_sheet(
    spec: PlotSpec,
    output_path: str | Path,
    metadata: TestMetadata | None = None,
    format: str = "pdf",
) -> None:
    """Generate a single-page plot sheet.

    One plot per page with metadata header.
    """
    engine = PlotEngine()

    if format == "html":
        html = engine.to_html(spec)
        Path(output_path).write_text(html, encoding="utf-8")
    elif format == "pdf":
        # Render the plot, embed it in HTML, then render the HTML to PDF
        fig = engine.render(spec)
        plot_html = fig.to_html(include_plotlyjs="cdn", full_html=False)

        try:
            env = _get_jinja_env()
            template = env.get_template("plot_sheet.html")
            full_html = template.render(
                title=spec.title or "Impakt Plot",
                plot_html=plot_html,
                metadata=metadata,
            )
        except Exception:
            # Fallback: just use the raw Plotly HTML
            full_html = f"""
            <!DOCTYPE html>
            <html>
            <head><title>{spec.title or "Impakt Plot"}</title></head>
            <body>
            <h1>{spec.title}</h1>
            {plot_html}
            </body>
            </html>
            """

        _html_to_pdf(full_html, output_path)
    else:
        # Static image
        image_bytes = engine.to_image(spec, format=format)
        Path(output_path).write_bytes(image_bytes)


def generate_injury_summary(
    criteria: dict[str, CriterionResult],
    output_path: str | Path,
    metadata: TestMetadata | None = None,
    format: str = "pdf",
) -> None:
    """Generate an injury criteria summary report."""
    try:
        env = _get_jinja_env()
        template = env.get_template("injury_summary.html")
        html = template.render(
            criteria=criteria,
            metadata=metadata,
        )
    except Exception:
        # Fallback HTML
        rows = ""
        for name, result in criteria.items():
            rows += f"<tr><td>{name}</td><td>{result.value:.2f}</td><td>{result.unit}</td></tr>"
        html = f"""
        <!DOCTYPE html>
        <html>
        <head><title>Injury Summary</title>
        <style>
        body {{ font-family: Arial, sans-serif; margin: 40px; }}
        table {{ border-collapse: collapse; width: 100%; }}
        th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
        th {{ background-color: #f4f4f4; }}
        </style>
        </head>
        <body>
        <h1>Injury Criteria Summary</h1>
        <table>
        <tr><th>Criterion</th><th>Value</th><th>Unit</th></tr>
        {rows}
        </table>
        </body>
        </html>
        """

    if format == "html":
        Path(output_path).write_text(html, encoding="utf-8")
    else:
        _html_to_pdf(html, output_path)


def generate_protocol_report(
    result: ProtocolResult,
    output_path: str | Path,
    metadata: TestMetadata | None = None,
) -> None:
    """Generate a full protocol rating report."""
    try:
        env = _get_jinja_env()
        template = env.get_template("protocol_report.html")
        html = template.render(
            result=result,
            metadata=metadata,
        )
    except Exception:
        # Fallback
        html = _fallback_protocol_html(result, metadata)

    _html_to_pdf(html, output_path)


def _fallback_protocol_html(
    result: ProtocolResult,
    metadata: TestMetadata | None = None,
) -> str:
    """Generate fallback HTML for a protocol report."""
    test_info = ""
    if metadata:
        test_info = f"""
        <div class="test-info">
        <p><strong>Test:</strong> {metadata.test_number}</p>
        <p><strong>Vehicle:</strong> {metadata.vehicle.year} {metadata.vehicle.make} {metadata.vehicle.model}</p>
        <p><strong>Dummy:</strong> {metadata.dummy.dummy_type} ({metadata.dummy.position})</p>
        </div>
        """

    rows = ""
    for rs in result.region_scores:
        color_badge = ""
        if rs.color:
            color_badge = (
                f'<span class="badge" style="background:{rs.color.value}">{rs.color.value}</span>'
            )
        elif rs.rating:
            rating_colors = {
                "good": "#2ecc71",
                "acceptable": "#f1c40f",
                "marginal": "#e67e22",
                "poor": "#e74c3c",
            }
            bg = rating_colors.get(rs.rating.value, "#bdc3c7")
            color_badge = f'<span class="badge" style="background:{bg}">{rs.rating.value}</span>'

        points_str = f"{rs.points:.1f}/{rs.max_points:.1f}" if rs.max_points > 0 else ""
        rows += f"<tr><td>{rs.region}</td><td>{rs.criterion}</td><td>{rs.value:.2f} {rs.unit}</td><td>{color_badge}</td><td>{points_str}</td></tr>"

    stars_display = ""
    if result.stars is not None:
        stars_display = "★" * result.stars + "☆" * (5 - result.stars)

    return f"""
    <!DOCTYPE html>
    <html>
    <head>
    <title>{result.protocol} {result.version} Report</title>
    <style>
    body {{ font-family: Arial, sans-serif; margin: 40px; color: #333; }}
    h1 {{ color: #2c3e50; }}
    .stars {{ font-size: 32px; color: #f1c40f; }}
    table {{ border-collapse: collapse; width: 100%; margin-top: 20px; }}
    th, td {{ border: 1px solid #ddd; padding: 10px; text-align: left; }}
    th {{ background-color: #2c3e50; color: white; }}
    .badge {{ padding: 4px 8px; border-radius: 4px; color: white; font-weight: bold; }}
    .test-info {{ background: #f8f9fa; padding: 15px; border-radius: 5px; margin: 15px 0; }}
    .summary {{ font-size: 18px; margin: 20px 0; }}
    </style>
    </head>
    <body>
    <h1>{result.protocol} {result.version} — Rating Report</h1>
    {test_info}
    <div class="summary">
    <p><span class="stars">{stars_display}</span></p>
    <p><strong>Overall:</strong> {result.overall_rating}</p>
    <p><strong>Score:</strong> {result.total_points:.1f}/{result.max_points:.1f} ({result.percentage:.0f}%)</p>
    </div>
    <table>
    <tr><th>Body Region</th><th>Criterion</th><th>Value</th><th>Rating</th><th>Points</th></tr>
    {rows}
    </table>
    </body>
    </html>
    """


def _html_to_pdf(html: str, output_path: str | Path) -> None:
    """Render HTML to PDF using WeasyPrint."""
    try:
        from weasyprint import HTML

        HTML(string=html).write_pdf(str(output_path))
        logger.info("PDF generated: %s", output_path)
    except ImportError:
        # Fallback: save as HTML
        html_path = Path(output_path).with_suffix(".html")
        html_path.write_text(html, encoding="utf-8")
        logger.warning(
            "WeasyPrint not available. Saved as HTML: %s. Install with: pip install weasyprint",
            html_path,
        )
    except Exception as e:
        logger.error("PDF generation failed: %s", e)
        # Save HTML as fallback
        html_path = Path(output_path).with_suffix(".html")
        html_path.write_text(html, encoding="utf-8")
57 src/impakt/report/templates/injury_summary.html Normal file
@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html>
<head>
  <title>Injury Criteria Summary</title>
  <style>
    @page { size: portrait; margin: 20mm; }
    body { font-family: Arial, Helvetica, sans-serif; margin: 0; padding: 20px; color: #333; }
    h1 { color: #2c3e50; font-size: 22px; border-bottom: 2px solid #2c3e50; padding-bottom: 10px; }
    .test-info { background: #f8f9fa; padding: 15px; border-radius: 5px; margin: 15px 0; font-size: 13px; }
    table { border-collapse: collapse; width: 100%; margin-top: 20px; }
    th { background-color: #2c3e50; color: white; padding: 10px; text-align: left; font-size: 12px; }
    td { border: 1px solid #ddd; padding: 8px; font-size: 12px; }
    tr:nth-child(even) { background-color: #f8f9fa; }
    .value { font-weight: bold; font-family: 'Courier New', monospace; }
    .footer { margin-top: 30px; padding-top: 8px; border-top: 1px solid #ddd; font-size: 9px; color: #999; }
  </style>
</head>
<body>
  <h1>Injury Criteria Summary</h1>

  {% if metadata %}
  <div class="test-info">
    {% if metadata.test_number %}<p><strong>Test:</strong> {{ metadata.test_number }}</p>{% endif %}
    {% if metadata.vehicle.make %}<p><strong>Vehicle:</strong> {{ metadata.vehicle.year }} {{ metadata.vehicle.make }} {{ metadata.vehicle.model }}</p>{% endif %}
    {% if metadata.dummy.dummy_type %}<p><strong>Dummy:</strong> {{ metadata.dummy.dummy_type }} ({{ metadata.dummy.position }})</p>{% endif %}
    {% if metadata.impact.test_type %}<p><strong>Test Type:</strong> {{ metadata.impact.test_type }} @ {{ metadata.impact.speed_kmh }} km/h</p>{% endif %}
  </div>
  {% endif %}

  <table>
    <thead>
      <tr>
        <th>Criterion</th>
        <th>Body Region</th>
        <th>Value</th>
        <th>Unit</th>
        <th>Time of Peak</th>
      </tr>
    </thead>
    <tbody>
      {% for name, result in criteria.items() %}
      <tr>
        <td>{{ result.criterion }}</td>
        <td>{{ result.body_region }}</td>
        <td class="value">{{ "%.2f"|format(result.value) }}</td>
        <td>{{ result.unit }}</td>
        <td>{% if result.time_of_peak is not none %}{{ "%.4f"|format(result.time_of_peak) }}s{% endif %}</td>
      </tr>
      {% endfor %}
    </tbody>
  </table>

  <div class="footer">
    Generated by Impakt
  </div>
</body>
</html>
36 src/impakt/report/templates/plot_sheet.html Normal file
@@ -0,0 +1,36 @@
<!DOCTYPE html>
<html>
<head>
  <title>{{ title }}</title>
  <style>
    @page { size: landscape; margin: 15mm; }
    body { font-family: Arial, Helvetica, sans-serif; margin: 0; padding: 20px; color: #333; }
    .header { border-bottom: 2px solid #2c3e50; padding-bottom: 10px; margin-bottom: 15px; }
    .header h1 { margin: 0; font-size: 18px; color: #2c3e50; }
    .meta-row { display: flex; gap: 30px; font-size: 11px; color: #666; margin-top: 5px; }
    .plot-container { width: 100%; }
    .footer { margin-top: 15px; padding-top: 8px; border-top: 1px solid #ddd; font-size: 9px; color: #999; }
  </style>
</head>
<body>
  <div class="header">
    <h1>{{ title }}</h1>
    {% if metadata %}
    <div class="meta-row">
      {% if metadata.test_number %}<span>Test: {{ metadata.test_number }}</span>{% endif %}
      {% if metadata.vehicle.make %}<span>Vehicle: {{ metadata.vehicle.year }} {{ metadata.vehicle.make }} {{ metadata.vehicle.model }}</span>{% endif %}
      {% if metadata.dummy.dummy_type %}<span>Dummy: {{ metadata.dummy.dummy_type }}</span>{% endif %}
      {% if metadata.impact.test_type %}<span>Type: {{ metadata.impact.test_type }}</span>{% endif %}
    </div>
    {% endif %}
  </div>

  <div class="plot-container">
    {{ plot_html | safe }}
  </div>

  <div class="footer">
    Generated by Impakt
  </div>
</body>
</html>
93 src/impakt/report/templates/protocol_report.html Normal file
@@ -0,0 +1,93 @@
<!DOCTYPE html>
<html>
<head>
  <title>{{ result.protocol }} {{ result.version }} Report</title>
  <style>
    @page { size: portrait; margin: 20mm; }
    body { font-family: Arial, Helvetica, sans-serif; margin: 0; padding: 20px; color: #333; }
    h1 { color: #2c3e50; font-size: 24px; margin-bottom: 5px; }
    h2 { color: #34495e; font-size: 16px; margin-top: 25px; }
    .subtitle { font-size: 14px; color: #7f8c8d; margin-bottom: 20px; }
    .stars { font-size: 36px; color: #f1c40f; letter-spacing: 4px; }
    .overall { background: #2c3e50; color: white; padding: 20px; border-radius: 8px; margin: 20px 0; text-align: center; }
    .overall h2 { color: white; margin: 0; }
    .overall .score { font-size: 28px; font-weight: bold; margin: 10px 0; }
    .test-info { background: #f8f9fa; padding: 15px; border-radius: 5px; margin: 15px 0; font-size: 12px; }
    table { border-collapse: collapse; width: 100%; margin-top: 15px; }
    th { background-color: #2c3e50; color: white; padding: 10px; text-align: left; font-size: 11px; text-transform: uppercase; }
    td { border: 1px solid #ddd; padding: 8px; font-size: 12px; }
    tr:nth-child(even) { background-color: #f8f9fa; }
    .badge { display: inline-block; padding: 3px 10px; border-radius: 3px; color: white; font-weight: bold; font-size: 11px; text-transform: uppercase; min-width: 60px; text-align: center; }
    .badge-green { background-color: #2ecc71; }
    .badge-yellow { background-color: #f1c40f; color: #333; }
    .badge-orange { background-color: #e67e22; }
    .badge-brown { background-color: #8B4513; }
    .badge-red { background-color: #e74c3c; }
    .badge-good { background-color: #2ecc71; }
    .badge-acceptable { background-color: #f1c40f; color: #333; }
    .badge-marginal { background-color: #e67e22; }
    .badge-poor { background-color: #e74c3c; }
    .value { font-family: 'Courier New', monospace; font-weight: bold; }
    .footer { margin-top: 30px; padding-top: 10px; border-top: 1px solid #ddd; font-size: 9px; color: #999; }
  </style>
</head>
<body>
  <h1>{{ result.protocol }} {{ result.version }}</h1>
  <div class="subtitle">Rating Report</div>

  {% if metadata %}
  <div class="test-info">
    {% if metadata.test_number %}<strong>Test:</strong> {{ metadata.test_number }} | {% endif %}
    {% if metadata.vehicle.make %}<strong>Vehicle:</strong> {{ metadata.vehicle.year }} {{ metadata.vehicle.make }} {{ metadata.vehicle.model }} | {% endif %}
    {% if metadata.dummy.dummy_type %}<strong>Dummy:</strong> {{ metadata.dummy.dummy_type }} | {% endif %}
    {% if metadata.impact.test_type %}<strong>Type:</strong> {{ metadata.impact.test_type }}{% endif %}
  </div>
  {% endif %}

  <div class="overall">
    {% if result.stars is not none %}
    <div class="stars">
      {% for i in range(result.stars) %}★{% endfor %}{% for i in range(5 - result.stars) %}☆{% endfor %}
    </div>
    {% endif %}
    <div class="score">{{ result.overall_rating }}</div>
    {% if result.max_points > 0 %}
    <div>{{ "%.1f"|format(result.total_points) }} / {{ "%.1f"|format(result.max_points) }} points ({{ "%.0f"|format(result.percentage) }}%)</div>
    {% endif %}
  </div>

  <h2>Body Region Results</h2>
  <table>
    <thead>
      <tr>
        <th>Region</th>
        <th>Criterion</th>
        <th>Value</th>
        <th>Rating</th>
        <th>Points</th>
      </tr>
    </thead>
    <tbody>
      {% for rs in result.region_scores %}
      <tr>
        <td>{{ rs.region }}</td>
        <td>{{ rs.criterion }}</td>
        <td class="value">{{ "%.2f"|format(rs.value) }} {{ rs.unit }}</td>
        <td>
          {% if rs.color %}
          <span class="badge badge-{{ rs.color.value }}">{{ rs.color.value }}</span>
          {% elif rs.rating %}
          <span class="badge badge-{{ rs.rating.value }}">{{ rs.rating.value }}</span>
          {% endif %}
        </td>
        <td>{% if rs.max_points > 0 %}{{ "%.1f"|format(rs.points) }}/{{ "%.1f"|format(rs.max_points) }}{% endif %}</td>
      </tr>
      {% endfor %}
    </tbody>
  </table>

  <div class="footer">
    Generated by Impakt — {{ result.protocol }} {{ result.version }}
  </div>
</body>
</html>
1 src/impakt/script/__init__.py Normal file
@@ -0,0 +1 @@
"""Scripting API and CLI."""
BIN src/impakt/script/__pycache__/__init__.cpython-312.pyc Normal file
Binary file not shown.
BIN src/impakt/script/__pycache__/__init__.cpython-314.pyc Normal file
Binary file not shown.
BIN src/impakt/script/__pycache__/api.cpython-312.pyc Normal file
Binary file not shown.
BIN src/impakt/script/__pycache__/api.cpython-314.pyc Normal file
Binary file not shown.
BIN src/impakt/script/__pycache__/cli.cpython-312.pyc Normal file
Binary file not shown.
329 src/impakt/script/api.py Normal file
@@ -0,0 +1,329 @@
|
||||
"""Top-level scripting API.
|
||||
|
||||
Provides the ``Session`` and ``Template`` classes that serve as the
|
||||
primary entry points for both scripting and the web UI.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
import numpy as np
|
||||
|
||||
from impakt.channel.model import Channel, ChannelGroup, TestData, TestMetadata
|
||||
from impakt.criteria.base import CriterionResult
|
||||
from impakt.io.mme import MMEReader
|
||||
from impakt.io.reader import ReaderRegistry, get_registry, register_reader
|
||||
from impakt.plot.engine import PlotEngine, cursor_values
|
||||
from impakt.plot.spec import ChannelRef, CursorValues, PlotSpec, PlotStyle
|
||||
from impakt.protocol.base import ProtocolResult
|
||||
from impakt.template.library import TemplateLibrary
|
||||
from impakt.template.model import SessionState, TemplateSpec
|
||||
from impakt.template.session import SessionManager
|
||||
from impakt.transform.cfc import CFCFilter
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Register the built-in MME reader
|
||||
register_reader(MMEReader())
|
||||
|
||||
|
||||
class Session:
|
||||
"""A loaded crash test session.
|
||||
|
||||
Wraps TestData with session state, template binding, and
|
||||
convenience methods for transforms, criteria, and plotting.
|
||||
|
||||
Usage:
|
||||
test = Session.open("/path/to/test_001")
|
||||
ch = test.channel("11HEAD0000ACXA")
|
||||
filtered = ch.transform.cfc(1000)
|
||||
"""
|
||||
|
||||
def __init__(self, test_data: TestData, session_mgr: SessionManager | None = None) -> None:
|
||||
self._data = test_data
|
||||
self._session_mgr = session_mgr or (
|
||||
SessionManager(test_data.path) if test_data.path else None
|
||||
)
|
||||
self._template: TemplateSpec | None = None
|
||||
|
||||
@classmethod
|
||||
def open(cls, path: str | Path) -> Session:
|
||||
"""Open a crash test from a path.
|
||||
|
||||
Auto-detects the format using the reader registry.
|
||||
"""
|
||||
path = Path(path).resolve()
|
||||
registry = get_registry()
|
||||
test_data = registry.read(path)
|
||||
|
||||
session_mgr = SessionManager(path)
|
||||
session = cls(test_data, session_mgr)
|
||||
|
||||
# Load existing session state if available
|
||||
if session_mgr.has_session:
|
||||
state = session_mgr.state
|
||||
if state.template_name:
|
||||
try:
|
||||
lib = TemplateLibrary()
|
||||
session._template = lib.get(state.template_name)
|
||||
except FileNotFoundError:
|
||||
pass
|
||||
|
||||
return session
|
||||
|
||||
@property
|
||||
def test_id(self) -> str:
|
||||
return self._data.test_id
|
||||
|
||||
@property
|
||||
def metadata(self) -> TestMetadata:
|
||||
return self._data.metadata
|
||||
|
||||
@property
|
||||
def data(self) -> TestData:
|
||||
"""Underlying TestData object."""
|
||||
return self._data
|
||||
|
||||
@property
|
||||
def channel_names(self) -> list[str]:
|
||||
return self._data.channel_names
|
||||
|
||||
@property
|
||||
def template(self) -> TemplateSpec | None:
|
||||
return self._template
|
||||
|
||||
# ----- Channel access -----
|
||||
|
||||
def channel(self, name: str) -> ChannelHandle:
|
||||
"""Get a channel by name, wrapped with transform convenience methods."""
|
||||
ch = self._data.get(name)
|
||||
return ChannelHandle(ch)
|
||||
|
||||
def find(self, pattern: str) -> list[Channel]:
|
||||
"""Find channels matching a pattern."""
|
||||
return self._data.find(pattern)
|
||||
|
||||
def group(self, pattern: str) -> ChannelGroup:
|
||||
"""Find a channel group (X/Y/Z family)."""
|
||||
return self._data.group(pattern)
|
||||
|
||||
def groups(self) -> dict[str, ChannelGroup]:
|
||||
"""All auto-detected channel groups."""
|
||||
return self._data.groups()
|
||||
|
||||
def channel_tree(self) -> dict[str, dict[str, dict[str, list[Channel]]]]:
|
||||
"""Hierarchical channel tree for UI display."""
|
||||
return self._data.channel_tree()
|
||||
|
||||
# ----- Template -----
|
||||
|
||||
def apply_template(self, name_or_spec: str | TemplateSpec) -> None:
|
||||
"""Apply a template to this session."""
|
||||
if isinstance(name_or_spec, str):
|
||||
lib = TemplateLibrary()
|
||||
spec = lib.get(name_or_spec)
|
||||
else:
|
||||
spec = name_or_spec
|
||||
|
||||
self._template = spec
|
||||
if self._session_mgr:
|
||||
self._session_mgr.apply_template(spec)
|
||||
|
||||
# ----- Quick plotting -----
|
||||
|
||||
def plot(
|
||||
self,
|
||||
*channel_names: str,
|
||||
title: str = "",
|
||||
cfc: int | None = None,
|
||||
) -> Any:
|
||||
"""Quick plot of one or more channels.
|
||||
|
||||
Returns a Plotly figure.
|
||||
"""
|
||||
refs: list[ChannelRef] = []
|
||||
for name in channel_names:
|
||||
ch = self._data.get(name)
|
||||
if cfc is not None:
|
||||
ch = CFCFilter(cfc_class=cfc).apply(ch)
|
||||
refs.append(
|
||||
ChannelRef(
|
||||
test_id=self.test_id,
|
||||
channel_name=name,
|
||||
channel=ch,
|
||||
style=PlotStyle(label=ch.code.short_label if ch.code.is_valid else name),
|
||||
)
|
||||
)
|
||||
|
||||
spec = PlotSpec(
|
||||
channels=refs,
|
||||
title=title or ", ".join(channel_names),
|
||||
y_label=refs[0].channel.unit if refs and refs[0].channel else "",
|
||||
)
|
||||
|
||||
engine = PlotEngine()
|
||||
return engine.render(spec)
|
||||
|
||||
# ----- Session state -----
|
||||
|
||||
def save(self) -> None:
|
||||
"""Save current session state."""
|
||||
if self._session_mgr:
|
||||
self._session_mgr.save()
|
||||
|
||||
def __repr__(self) -> str:
|
||||
tmpl = f", template={self._template.name}" if self._template else ""
|
||||
return f"Session({self.test_id}, {len(self._data)} channels{tmpl})"
|
||||
|
||||
def __len__(self) -> int:
|
||||
return len(self._data)
|
||||
|
||||
def __contains__(self, name: str) -> bool:
|
||||
return name in self._data
|
||||
|
||||
|
||||
class ChannelHandle:
|
||||
"""Wrapper around a Channel providing fluent transform access.
|
||||
|
||||
Example:
|
||||
ch = session.channel("11HEAD0000ACXA")
|
||||
filtered = ch.transform.cfc(1000)
|
||||
aligned = ch.transform.x_align(method="threshold", threshold_value=5.0)
|
||||
"""
|
||||
|
||||
def __init__(self, channel: Channel) -> None:
|
||||
self._channel = channel
|
||||
self.transform = TransformProxy(channel)
|
||||
|
||||
@property
|
||||
def raw(self) -> Channel:
|
||||
"""The underlying Channel object."""
|
||||
return self._channel
|
||||
|
||||
@property
|
||||
def name(self) -> str:
|
||||
return self._channel.name
|
||||
|
||||
@property
|
||||
def data(self) -> np.ndarray:
|
||||
return self._channel.data
|
||||
|
||||
@property
|
||||
def time(self) -> np.ndarray:
|
||||
return self._channel.time
|
||||
|
||||
@property
|
||||
def unit(self) -> str:
|
||||
return self._channel.unit
|
||||
|
||||
def value_at(self, t: float) -> float:
|
||||
return self._channel.value_at(t)
|
||||
|
||||
def plot(self, title: str = "") -> Any:
|
||||
"""Quick plot of this channel."""
|
||||
spec = PlotSpec(
|
||||
channels=[ChannelRef(channel=self._channel)],
|
||||
title=title or self._channel.code.description,
|
||||
y_label=self._channel.unit,
|
||||
)
|
||||
return PlotEngine().render(spec)
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return repr(self._channel)
|
||||
|
||||
|
||||
class TransformProxy:
|
||||
"""Fluent transform interface for a channel.
|
||||
|
||||
Each method returns a new Channel (non-destructive).
|
||||
"""
|
||||
|
||||
def __init__(self, channel: Channel) -> None:
|
||||
self._channel = channel
|
||||
|
||||
def cfc(self, cfc_class: int) -> Channel:
|
||||
"""Apply CFC filter."""
|
||||
from impakt.transform.cfc import CFCFilter
|
||||
|
||||
return CFCFilter(cfc_class=cfc_class).apply(self._channel)
|
||||
|
||||
def x_align(
|
||||
self, method: str = "manual", reference_time: float = 0.0, **kwargs: Any
|
||||
) -> Channel:
|
||||
"""Apply time-zero alignment."""
|
||||
from impakt.transform.align import XAlign
|
||||
|
||||
return XAlign(method=method, reference_time=reference_time, **kwargs).apply(self._channel)
|
||||
|
||||
def y_align(self, window: tuple[float, float] | None = None) -> Channel:
|
||||
"""Apply Y-axis zero correction."""
|
||||
from impakt.transform.align import YAlign
|
||||
|
||||
start, end = window if window else (None, None)
|
||||
return YAlign(window_start=start, window_end=end).apply(self._channel)
|
||||
|
||||
def trim(self, t_start: float | None = None, t_end: float | None = None) -> Channel:
|
||||
"""Trim to a time range."""
|
||||
from impakt.transform.resample import Trim
|
||||
|
||||
return Trim(t_start=t_start, t_end=t_end).apply(self._channel)
|
||||
|
||||
def resample(self, target_rate: float) -> Channel:
|
||||
"""Resample to a new rate."""
|
||||
from impakt.transform.resample import Resample
|
||||
|
||||
return Resample(target_rate=target_rate).apply(self._channel)
|
||||
|
||||
|
||||
class Template:
    """Template management interface."""

    @staticmethod
    def load(name: str) -> TemplateSpec:
        """Load a template from the global library by name."""
        lib = TemplateLibrary()
        return lib.get(name)

    @staticmethod
    def list() -> list[str]:
        """List available templates in the global library."""
        lib = TemplateLibrary()
        return lib.list()

    @staticmethod
    def save(spec: TemplateSpec) -> Path:
        """Save a template to the global library."""
        lib = TemplateLibrary()
        return lib.save(spec)

    @staticmethod
    def create(
        name: str,
        plots: list[dict[str, Any]] | None = None,
        criteria: list[str] | None = None,
        protocol: str = "",
        **kwargs: Any,
    ) -> TemplateSpec:
        """Create a new template spec."""
        from impakt.template.model import PlotDefinition

        plot_defs = []
        for p in plots or []:
            plot_defs.append(
                PlotDefinition(
                    title=p.get("title", ""),
                    channel_patterns=p.get("channels", []),
                    transforms=p.get("transforms", []),
                )
            )

        return TemplateSpec(
            name=name,
            plots=plot_defs,
            criteria=criteria or [],
            protocol=protocol,
            **kwargs,
        )

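To illustrate how `Template.create` maps plain dicts onto spec objects, here is a runnable miniature with stand-in dataclasses. The field names mirror the code above, but these are not the real `impakt.template.model` definitions, and the template name and channel pattern are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class PlotDefinition:
    """Stand-in for impakt.template.model.PlotDefinition."""
    title: str = ""
    channel_patterns: list[str] = field(default_factory=list)
    transforms: list[str] = field(default_factory=list)


@dataclass
class TemplateSpec:
    """Stand-in for the real TemplateSpec."""
    name: str
    plots: list[PlotDefinition] = field(default_factory=list)
    criteria: list[str] = field(default_factory=list)
    protocol: str = ""


def create(name: str, plots=None, criteria=None, protocol: str = "") -> TemplateSpec:
    # Same dict-to-dataclass mapping as Template.create above.
    plot_defs = [
        PlotDefinition(
            title=p.get("title", ""),
            channel_patterns=p.get("channels", []),
            transforms=p.get("transforms", []),
        )
        for p in plots or []
    ]
    return TemplateSpec(name, plot_defs, criteria or [], protocol)


spec = create(
    "frontal_demo",
    plots=[{"title": "Head acceleration", "channels": ["11HEAD*AC*"]}],
    criteria=["HIC15"],
    protocol="euro_ncap",
)
print(spec.plots[0].channel_patterns)  # → ['11HEAD*AC*']
```

Accepting dicts at the boundary keeps the public API YAML/JSON-friendly while the library works with typed spec objects internally.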
140 src/impakt/script/cli.py Normal file
@@ -0,0 +1,140 @@
"""Command-line interface for Impakt."""

from __future__ import annotations

import argparse


def main(argv: list[str] | None = None) -> None:
    """Main CLI entry point."""
    parser = argparse.ArgumentParser(
        prog="impakt",
        description="Impakt — Crash test data analysis, visualization, and reporting",
    )

    subparsers = parser.add_subparsers(dest="command", help="Available commands")

    # --- serve ---
    serve_parser = subparsers.add_parser("serve", help="Launch the web UI")
    serve_parser.add_argument("path", help="Path to test data directory")
    serve_parser.add_argument("--template", "-t", help="Template to apply")
    serve_parser.add_argument("--port", "-p", type=int, default=8050, help="Port (default: 8050)")
    serve_parser.add_argument("--debug", action="store_true", help="Enable debug mode")

    # --- info ---
    info_parser = subparsers.add_parser("info", help="Show test metadata")
    info_parser.add_argument("path", help="Path to test data directory")

    # --- channels ---
    ch_parser = subparsers.add_parser("channels", help="List channels")
    ch_parser.add_argument("path", help="Path to test data directory")
    ch_parser.add_argument("--pattern", "-p", help="Filter by pattern")
    ch_parser.add_argument("--tree", action="store_true", help="Show as tree")

    # --- evaluate ---
    eval_parser = subparsers.add_parser("evaluate", help="Run injury criteria evaluation")
    eval_parser.add_argument("path", help="Path to test data directory")
    eval_parser.add_argument(
        "--protocol", choices=["euro_ncap", "us_ncap", "iihs"], default="euro_ncap"
    )
    eval_parser.add_argument("--output", "-o", help="Output PDF path")

    # --- report ---
    report_parser = subparsers.add_parser("report", help="Generate a report")
    report_parser.add_argument("path", help="Path to test data directory")
    report_parser.add_argument("--template", "-t", help="Report template")
    report_parser.add_argument("--output", "-o", required=True, help="Output PDF path")

    args = parser.parse_args(argv)

    if args.command == "serve":
        _cmd_serve(args)
    elif args.command == "info":
        _cmd_info(args)
    elif args.command == "channels":
        _cmd_channels(args)
    elif args.command == "evaluate":
        _cmd_evaluate(args)
    elif args.command == "report":
        _cmd_report(args)
    else:
        parser.print_help()


def _cmd_serve(args: argparse.Namespace) -> None:
    from impakt.script.api import Session
    from impakt.web.app import create_app

    session = Session.open(args.path)
    if args.template:
        session.apply_template(args.template)

    app = create_app(session)
    print(f"Impakt web UI running at http://localhost:{args.port}")
    app.run(debug=args.debug, port=args.port)


def _cmd_info(args: argparse.Namespace) -> None:
    from impakt.script.api import Session

    session = Session.open(args.path)
    m = session.metadata

    print(f"Test: {m.test_number}")
    if m.test_date:
        print(f"Date: {m.test_date}")
    if m.test_facility:
        print(f"Facility: {m.test_facility}")
    if m.vehicle.make:
        print(f"Vehicle: {m.vehicle.year} {m.vehicle.make} {m.vehicle.model}")
    if m.dummy.dummy_type:
        print(f"Dummy: {m.dummy.dummy_type} ({m.dummy.position})")
    if m.impact.test_type:
        print(f"Impact: {m.impact.test_type} @ {m.impact.speed_kmh} km/h")
    print(f"Channels: {len(session)}")
    print(f"Groups: {len(session.groups())}")


def _cmd_channels(args: argparse.Namespace) -> None:
    from impakt.script.api import Session

    session = Session.open(args.path)

    if args.tree:
        tree = session.channel_tree()
        for obj, locations in sorted(tree.items()):
            print(f"\n{obj}")
            for loc, measurements in sorted(locations.items()):
                print(f"  {loc}")
                for meas, channels in sorted(measurements.items()):
                    print(f"    {meas}")
                    for ch in channels:
                        print(f"      {ch.name} ({ch.unit}, {ch.n_samples} pts)")
    else:
        channels = session.find(args.pattern) if args.pattern else list(session.data)
        for ch in sorted(channels, key=lambda c: c.name):
            desc = ch.code.description if ch.code.is_valid else ch.name
            print(f"  {ch.name:<20} {desc:<50} {ch.unit:<10} {ch.n_samples} pts")


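The `--tree` branch of `_cmd_channels` assumes `channel_tree()` returns a three-level nested dict keyed by test object, location, and measurement type. The traversal itself can be exercised standalone with plain data; the channel names below are hypothetical examples in the ISO-code style, not real test data:

```python
# Nested dict shaped like session.channel_tree():
# {test_object: {location: {measurement: [channel names]}}}
tree = {
    "Driver dummy": {
        "HEAD": {"Acceleration": ["11HEAD0000ACXA", "11HEAD0000ACYA"]},
        "CHST": {"Deflection": ["11CHST0000DSXA"]},
    }
}

lines = []
for obj, locations in sorted(tree.items()):
    lines.append(obj)
    for loc, measurements in sorted(locations.items()):
        lines.append(f"  {loc}")
        for meas, names in sorted(measurements.items()):
            lines.append(f"    {meas}")
            lines.extend(f"      {n}" for n in names)

print("\n".join(lines))
```

Sorting at every level makes the listing deterministic, which matters when the output is diffed between test runs.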
def _cmd_evaluate(args: argparse.Namespace) -> None:
    from impakt.script.api import Session

    session = Session.open(args.path)

    # Auto-detect channels and compute criteria.
    # (This is a simplified version — a real implementation would use
    # the template system to know which channels to use.)
    print(f"Evaluating {session.test_id} with {args.protocol}...")
    print("(Full auto-detection not yet implemented — use the scripting API)")


def _cmd_report(args: argparse.Namespace) -> None:
    print("Report generation not yet available via CLI. Use the scripting API.")


if __name__ == "__main__":
    main()
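Because `main()` takes an explicit `argv`, subcommand parsing can be checked without touching real data. A stripped-down mirror of the parser above, covering only the `serve` and `channels` subcommands, shows how the flags resolve (the `/tests/run01` path is a placeholder):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Miniature of the impakt CLI parser: same flags, no handlers.
    parser = argparse.ArgumentParser(prog="impakt")
    sub = parser.add_subparsers(dest="command")

    serve = sub.add_parser("serve")
    serve.add_argument("path")
    serve.add_argument("--template", "-t")
    serve.add_argument("--port", "-p", type=int, default=8050)

    ch = sub.add_parser("channels")
    ch.add_argument("path")
    ch.add_argument("--pattern", "-p")
    ch.add_argument("--tree", action="store_true")
    return parser


args = build_parser().parse_args(["serve", "/tests/run01", "--port", "8060"])
print(args.command, args.port)  # → serve 8060

args = build_parser().parse_args(["channels", "/tests/run01", "-p", "*HEAD*", "--tree"])
print(args.pattern, args.tree)  # → *HEAD* True
```

Note that `-p` is safe to reuse across subcommands (`--port` vs `--pattern`) because each subparser owns its own option namespace.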
1 src/impakt/template/__init__.py Normal file
@@ -0,0 +1 @@
"""Template and session management."""