Compare commits


5 Commits

16 changed files with 2003 additions and 95 deletions

README.md

@@ -1,3 +1,233 @@
This project was a test run using Cursor and "vibe coding" to create a full object detection project. I wrote almost no lines of code to get to this point, and the result kind of works. The technology is definitely impressive, but it feels better suited to projects that can be developed in a more test-driven way. I'll update this later with other things I've learned along the way.
I stopped the project here because it got trapped in a doom loop, unable to fix a bug in the eval code, and I wanted this to be an investigation into how well I could do with very low intervention.
# Torchvision Vibecoding Project
A project demonstrating finetuning torchvision object detection models, built with the help of Vibecoding AI.
A PyTorch-based object detection project using Mask R-CNN to detect pedestrians in the Penn-Fudan dataset. This project demonstrates model training, evaluation, and visualization with PyTorch and Torchvision.
## Table of Contents
- [Prerequisites](#prerequisites)
- [Project Setup](#project-setup)
- [Project Structure](#project-structure)
- [Data Preparation](#data-preparation)
- [Configuration](#configuration)
- [Training](#training)
- [Evaluation](#evaluation)
- [Visualization](#visualization)
- [Testing](#testing)
- [Debugging](#debugging)
## Prerequisites
- Python 3.10+
- [uv](https://github.com/astral-sh/uv) for package management
- CUDA-compatible GPU (optional but recommended)
## Project Setup
1. Clone the repository:
```bash
git clone https://github.com/yourusername/torchvision-vibecoding-project.git
cd torchvision-vibecoding-project
```
2. Set up the environment with uv (the repo already has a `pyproject.toml`, so a sync is all that's needed):
```bash
uv sync
```
3. Install development dependencies:
```bash
uv add ruff pytest matplotlib
```
4. Set up pre-commit hooks:
```bash
pre-commit install
```
## Project Structure
```
├── configs/                          # Configuration files
│   ├── base_config.py                # Base configuration with defaults
│   ├── debug_config.py               # Configuration for quick debugging
│   └── pennfudan_maskrcnn_config.py  # Configuration for Penn-Fudan dataset
├── data/                             # Dataset directory (not tracked by git)
│   └── PennFudanPed/                 # Penn-Fudan pedestrian dataset
├── models/                           # Model definitions
│   └── detection.py                  # Mask R-CNN model definition
├── outputs/                          # Training outputs (not tracked by git)
│   └── <config_name>/                # Named by configuration
│       ├── checkpoints/              # Model checkpoints
│       └── *.log                     # Log files
├── scripts/                          # Utility scripts
│   ├── download_data.sh              # Script to download dataset
│   ├── test_model.py                 # Script for quick model testing
│   └── visualize_predictions.py      # Script for prediction visualization
├── tests/                            # Unit tests
│   ├── conftest.py                   # Test fixtures
│   ├── test_data_utils.py            # Tests for data utilities
│   ├── test_model.py                 # Tests for model functionality
│   └── test_visualization.py         # Tests for visualization
├── utils/                            # Utility modules
│   ├── common.py                     # Common functionality
│   ├── data_utils.py                 # Dataset handling
│   ├── eval_utils.py                 # Evaluation functions
│   └── log_utils.py                  # Logging utilities
├── train.py                          # Training script
├── test.py                           # Evaluation script
├── pyproject.toml                    # Project dependencies and configuration
├── .pre-commit-config.yaml           # Pre-commit configuration
└── README.md                         # This file
```
## Data Preparation
Download the Penn-Fudan pedestrian dataset:
```bash
./scripts/download_data.sh
```
This will download and extract the dataset to the `data/PennFudanPed` directory.
## Configuration
The project uses Python dictionaries for configuration:
- `configs/base_config.py`: Default configuration values
- `configs/pennfudan_maskrcnn_config.py`: Configuration for training on Penn-Fudan
- `configs/debug_config.py`: Configuration for quick testing (CPU, minimal training)
Key configuration parameters:
- `data_root`: Path to dataset
- `output_dir`: Directory for outputs
- `device`: Computing device ('cuda' or 'cpu')
- `batch_size`: Batch size for training
- `num_epochs`: Number of training epochs
- `lr`, `momentum`, `weight_decay`: Optimizer parameters
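As a concrete example, a new experiment config is just a module that copies the base dictionary and overrides a few keys, mirroring `configs/debug_config.py` (the file name and values below are illustrative):
```python
# configs/my_experiment_config.py: illustrative; follows the same
# copy-and-update pattern as configs/debug_config.py.
from configs.base_config import base_config

config = base_config.copy()
config.update(
    {
        "config_name": "my_experiment",  # outputs go to outputs/my_experiment/
        "data_root": "data/PennFudanPed",
        "num_classes": 2,  # background + pedestrian
        "batch_size": 1,
        "num_epochs": 10,
        "lr": 0.002,
        "device": "cuda",
    }
)
```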
## Training
Run the training script with a configuration file:
```bash
python train.py --config configs/pennfudan_maskrcnn_config.py
```
For quick debugging on CPU:
```bash
python train.py --config configs/debug_config.py
```
To resume training from the latest checkpoint:
```bash
python train.py --config configs/pennfudan_maskrcnn_config.py --resume
```
Training outputs (logs, checkpoints) are saved to `outputs/<config_name>/`.
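Resume support relies on the `checkpoint_epoch_N.pth` naming used by the training loop. A minimal sketch of how the latest checkpoint might be located (the helper below is hypothetical; the actual logic lives in `train.py`):
```python
# Hypothetical helper: find the newest checkpoint_epoch_N.pth in a directory.
import os
import re

def find_latest_checkpoint(checkpoint_dir):
    """Return the path of the highest-epoch checkpoint, or None if none exist."""
    pattern = re.compile(r"checkpoint_epoch_(\d+)\.pth$")
    matches = [
        (int(m.group(1)), name)
        for name in os.listdir(checkpoint_dir)
        for m in [pattern.search(name)]
        if m
    ]
    if not matches:
        return None
    _, latest = max(matches)
    return os.path.join(checkpoint_dir, latest)
```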
## Evaluation
Evaluate a trained model:
```bash
python test.py --config configs/pennfudan_maskrcnn_config.py --checkpoint outputs/pennfudan_maskrcnn_v1/checkpoints/checkpoint_epoch_10.pth
```
This runs the model on the test dataset and reports metrics.
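Under the hood, `test.py` builds the dataset and model, loads the checkpoint, and hands everything to `utils.eval_utils.evaluate`. A rough sketch of the same flow (helper names like `load_config` and `load_checkpoint` come from `utils/common.py` as used by the scripts in this repo; their exact signatures are assumed from that usage):
```python
# Sketch: programmatic evaluation, mirroring what test.py does.
import torch
from torch.utils.data import DataLoader

from models.detection import get_maskrcnn_model
from utils.common import load_checkpoint, load_config
from utils.data_utils import PennFudanDataset, collate_fn, get_transform
from utils.eval_utils import evaluate

config = load_config("configs/pennfudan_maskrcnn_config.py")
device = torch.device(config.get("device", "cpu"))

model = get_maskrcnn_model(
    num_classes=config["num_classes"], pretrained=False, pretrained_backbone=False
)
load_checkpoint(
    "outputs/pennfudan_maskrcnn_v1/checkpoints/checkpoint_epoch_10.pth", model, device
)
model.to(device)

dataset = PennFudanDataset(
    root=config["data_root"], transforms=get_transform(train=False)
)
loader = DataLoader(dataset, batch_size=1, shuffle=False, collate_fn=collate_fn)

metrics = evaluate(model, loader, device, max_samples=10)  # cap for a quick check
print(metrics["average_loss"])
```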
## Visualization
Visualize model predictions on images:
```bash
python scripts/visualize_predictions.py --config configs/pennfudan_maskrcnn_config.py --checkpoint outputs/pennfudan_maskrcnn_v1/checkpoints/checkpoint_epoch_10.pth --index 0 --output prediction.png
```
Parameters:
- `--config`: Configuration file path
- `--checkpoint`: Model checkpoint path
- `--index`: Image index in dataset (default: 0)
- `--threshold`: Detection confidence threshold (default: 0.5)
- `--output`: Output image path (optional, displays interactively if not specified)
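The underlying `visualize_prediction` helper can also be called directly. A small sketch with a synthetic prediction (the dict layout follows what Mask R-CNN returns and what `tests/test_visualization.py` exercises):
```python
# Sketch: calling visualize_prediction with a synthetic prediction.
import torch
from scripts.visualize_predictions import visualize_prediction

image = torch.rand(3, 400, 600)  # dummy RGB image tensor [C, H, W]
prediction = {
    "boxes": torch.tensor([[100, 100, 200, 200]], dtype=torch.float32),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 1, 400, 600),  # one [1, H, W] soft mask per detection
}
fig = visualize_prediction(image, prediction, threshold=0.5)
fig.savefig("demo_prediction.png")
```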
## Testing
Run all tests:
```bash
python -m pytest
```
Run a specific test file:
```bash
python -m pytest tests/test_data_utils.py
```
Run tests with verbose output:
```bash
python -m pytest -v
```
## Debugging
For quick model testing without full training:
```bash
python scripts/test_model.py
```
This verifies:
- Model creation
- Forward pass
- Backward pass
- Dataset loading
For training with minimal resources:
```bash
python train.py --config configs/debug_config.py
```
This uses:
- CPU computation
- Minimal epochs (1)
- Small batch size (1)
- No multiprocessing
## Code Quality
Format code:
```bash
ruff format .
```
Run linter:
```bash
ruff check .
```
Fix auto-fixable issues:
```bash
ruff check --fix .
```
Run pre-commit checks:
```bash
pre-commit run --all-files
```

configs/debug_config.py (new file)

@@ -0,0 +1,25 @@
from configs.base_config import base_config
# Create a debug configuration with minimal settings
config = base_config.copy()
# Update settings for quick debugging
config.update(
{
# Core configuration
"config_name": "debug_run",
"data_root": "data/PennFudanPed",
"num_classes": 2, # background + pedestrian
# Minimal training parameters
"batch_size": 1,
"num_epochs": 1, # Just one epoch for testing
"val_split_ratio": 0.2, # Use more validation samples for better testing coverage
# Performance optimizations
"pin_memory": False,
"num_workers": 0, # Use 0 workers to avoid multiprocessing complexities during debugging
# Logging settings
"log_freq": 1, # Log every batch for debugging
# Device setting - use CPU for reliable debugging
"device": "cpu", # Using CPU ensures consistent behavior across systems
}
)

configs/pennfudan_maskrcnn_config.py

@@ -1,33 +1,34 @@
"""
Configuration for training Mask R-CNN on the Penn-Fudan dataset.
Configuration for MaskRCNN training on the PennFudan Dataset.
"""
from configs.base_config import base_config
# Create a copy of the base configuration
config = base_config.copy()
# Update specific values for this experiment
config.update(
{
# Core configuration
"config_name": "pennfudan_maskrcnn_v1",
"data_root": "data/PennFudanPed",
"num_classes": 2, # background + pedestrian
# Training parameters - modified for memory constraints
"batch_size": 1, # Reduced from 2 to 1 to save memory
"num_epochs": 10,
# Optimizer settings
"lr": 0.002, # Slightly reduced learning rate for smaller batch size
"momentum": 0.9,
"weight_decay": 0.0005,
# Memory optimization settings
"pin_memory": False, # Set to False to reduce memory pressure
"num_workers": 2, # Reduced from 4 to 2
# Device settings
"device": "cuda",
}
)
config = {
# Data settings
"data_root": "data/PennFudanPed",
"output_dir": "outputs",
# Hardware settings
"device": "cuda", # "cuda" or "cpu"
# Model settings
"num_classes": 2, # Background + person
# Training settings
"batch_size": 1, # Reduced from 2 to 1 to save memory
"num_epochs": 10,
"seed": 42,
# Optimizer settings
"lr": 0.002,
"momentum": 0.9,
"weight_decay": 0.0005,
"lr_step_size": 3,
"lr_gamma": 0.1,
# Logging and checkpoints
"log_freq": 10, # Log every N steps
"checkpoint_freq": 1, # Save checkpoint every N epochs
# Run identification
"config_name": "pennfudan_maskrcnn_v1",
# DataLoader settings
"pin_memory": False, # Set to False to reduce memory usage
"num_workers": 2, # Reduced from 4 to 2 to reduce memory pressure
}
# Ensure derived paths or settings are consistent if needed
# (Not strictly necessary with this simple structure)
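The `lr_step_size` / `lr_gamma` pair suggests a `StepLR` schedule. A sketch of how these keys are presumably consumed in `train.py` (that wiring is not shown in this diff, so treat it as an assumption):
```python
# Assumed optimizer/scheduler wiring for the lr_* keys above.
import torch

from configs.pennfudan_maskrcnn_config import config
from models.detection import get_maskrcnn_model

model = get_maskrcnn_model(
    num_classes=config["num_classes"], pretrained=False, pretrained_backbone=False
)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=config["lr"],
    momentum=config["momentum"],
    weight_decay=config["weight_decay"],
)
# StepLR: multiply the learning rate by lr_gamma every lr_step_size epochs.
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=config["lr_step_size"], gamma=config["lr_gamma"]
)
```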

pyproject.toml

@@ -5,6 +5,7 @@ description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"matplotlib>=3.10.1",
"numpy>=2.2.4",
"pillow>=11.1.0",
"pytest>=8.3.5",

scripts/test_model.py (new executable file)

@@ -0,0 +1,152 @@
#!/usr/bin/env python
"""
Quick model testing script to verify model creation and inference.
"""
import os
import sys
import time
import torch
# Add project root to the path to enable imports
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from models.detection import get_maskrcnn_model
from utils.data_utils import PennFudanDataset, get_transform
def test_model_creation():
"""Test that we can create the model."""
print("Testing model creation...")
model = get_maskrcnn_model(
num_classes=2, pretrained=False, pretrained_backbone=False
)
print("✓ Model created successfully")
return model
def test_model_forward(model, device):
"""Test model forward pass with random inputs."""
print("\nTesting model forward pass...")
# Create a random batch
image = torch.rand(3, 300, 400, device=device) # Random image
# Create a random target
target = {
"boxes": torch.tensor(
[[100, 100, 200, 200]], dtype=torch.float32, device=device
),
"labels": torch.tensor([1], dtype=torch.int64, device=device),
"masks": torch.randint(0, 2, (1, 300, 400), dtype=torch.uint8, device=device),
"image_id": torch.tensor([0], device=device),
"area": torch.tensor([10000.0], dtype=torch.float32, device=device),
"iscrowd": torch.tensor([0], dtype=torch.uint8, device=device),
}
# Test inference mode (no targets)
model.eval()
with torch.no_grad():
start_time = time.time()
output_inference = model([image])
inference_time = time.time() - start_time
# Verify inference output
print(f"✓ Inference mode output: {type(output_inference)}")
print(f"✓ Inference time: {inference_time:.3f}s")
print(f"✓ Detection boxes shape: {output_inference[0]['boxes'].shape}")
print(f"✓ Detection scores shape: {output_inference[0]['scores'].shape}")
# Test training mode (with targets)
model.train()
start_time = time.time()
output_train = model([image], [target])
train_time = time.time() - start_time
# Verify training output
print(f"✓ Training mode output: {type(output_train)}")
print(f"✓ Training time: {train_time:.3f}s")
# Print loss values
for loss_name, loss_value in output_train.items():
print(f"{loss_name}: {loss_value.item():.4f}")
return output_train
def test_model_backward(model, loss_dict, device):
"""Test model backward pass."""
print("\nTesting model backward pass...")
# Calculate total loss
total_loss = sum(loss for loss in loss_dict.values())
# Create optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
# Backward pass
start_time = time.time()
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
backward_time = time.time() - start_time
print("✓ Backward pass and optimization completed")
print(f"✓ Backward time: {backward_time:.3f}s")
# Check that gradients were calculated
has_gradients = any(
param.grad is not None for param in model.parameters() if param.requires_grad
)
print(f"✓ Model has gradients: {has_gradients}")
def test_dataset():
"""Test that we can load the dataset."""
print("\nTesting dataset loading...")
data_root = "data/PennFudanPed"
if not os.path.exists(data_root):
print("✗ Dataset not found at", data_root)
return None
# Create dataset
dataset = PennFudanDataset(root=data_root, transforms=get_transform(train=True))
print(f"✓ Dataset loaded with {len(dataset)} samples")
# Test loading a sample
start_time = time.time()
img, target = dataset[0]
load_time = time.time() - start_time
print(f"✓ Sample loaded in {load_time:.3f}s")
print(f"✓ Image shape: {img.shape}")
print(f"✓ Target boxes shape: {target['boxes'].shape}")
return dataset
def main():
"""Run all tests."""
print("=== Quick Model Testing Script ===")
# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
# Run tests
model = test_model_creation()
model.to(device)
loss_dict = test_model_forward(model, device)
test_model_backward(model, loss_dict, device)
test_dataset()
print("\n=== All tests completed successfully ===")
if __name__ == "__main__":
main()

scripts/visualize_predictions.py (new executable file)

@@ -0,0 +1,175 @@
#!/usr/bin/env python
"""
Visualization script for model predictions on the Penn-Fudan dataset.
This helps visualize and debug model predictions.
"""
import argparse
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import torch
# Add project root to path for imports
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from models.detection import get_maskrcnn_model
from utils.common import load_checkpoint, load_config
from utils.data_utils import PennFudanDataset, get_transform
def visualize_prediction(image, prediction, threshold=0.5):
"""
Visualize model prediction on an image.
Args:
image (torch.Tensor): The input image [C, H, W]
prediction (dict): Model prediction dict with boxes, scores, labels, masks
threshold (float): Score threshold for visualization
Returns:
plt.Figure: The matplotlib figure with the visualization
"""
# Convert image from tensor to numpy
img_np = image.permute(1, 2, 0).cpu().numpy()
# Denormalize if needed
if img_np.max() <= 1.0:
img_np = (img_np * 255).astype(np.uint8)
# Create figure and axes
fig, ax = plt.subplots(1, 1, figsize=(12, 9))
ax.imshow(img_np)
ax.set_title("Model Predictions")
# Get predictions
boxes = prediction["boxes"].cpu().numpy()
scores = prediction["scores"].cpu().numpy()
labels = prediction["labels"].cpu().numpy()
masks = prediction["masks"].cpu().numpy()
# Filter by threshold
mask = scores >= threshold
boxes = boxes[mask]
scores = scores[mask]
labels = labels[mask]
masks = masks[mask]
# Draw predictions
for box, score, label, mask in zip(boxes, scores, labels, masks):
# Draw box
x1, y1, x2, y2 = box
rect = plt.Rectangle(
(x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="red", linewidth=2
)
ax.add_patch(rect)
# Add label and score
ax.text(
x1, y1, f"Person: {score:.2f}", bbox=dict(facecolor="yellow", alpha=0.5)
)
# Draw mask (with transparency)
mask = mask[0] > 0.5 # Threshold mask
mask_color = np.zeros((mask.shape[0], mask.shape[1], 3), dtype=np.uint8)
mask_color[mask] = [255, 0, 0] # Red color
ax.imshow(mask_color, alpha=0.3)
# Show count of detections
ax.set_xlabel(f"Found {len(boxes)} pedestrians with confidence >= {threshold}")
plt.tight_layout()
return fig
def run_inference(model, dataset, device, idx=0):
"""
Run inference on a single image from the dataset.
Args:
model (torch.nn.Module): The model
dataset (PennFudanDataset): The dataset
device (torch.device): The device
idx (int): Index of the image to test
Returns:
tuple: (image, prediction)
"""
# Get image
image, _ = dataset[idx]
# Prepare for model
image = image.to(device)
# Run inference
model.eval()
with torch.no_grad():
prediction = model([image])[0]
return image, prediction
def main():
"""Main entry point."""
parser = argparse.ArgumentParser(description="Visualize model predictions")
parser.add_argument("--config", required=True, help="Path to config file")
parser.add_argument("--checkpoint", required=True, help="Path to checkpoint file")
parser.add_argument("--index", type=int, default=0, help="Image index to visualize")
parser.add_argument("--threshold", type=float, default=0.5, help="Score threshold")
parser.add_argument("--output", help="Path to save visualization image")
args = parser.parse_args()
# Load config
config = load_config(args.config)
# Setup device
device = torch.device(config.get("device", "cpu"))
print(f"Using device: {device}")
# Create model
model = get_maskrcnn_model(
num_classes=config.get("num_classes", 2),
pretrained=False,
pretrained_backbone=False,
)
# Load checkpoint
checkpoint, _ = load_checkpoint(args.checkpoint, model, device)
model.to(device)
print(f"Loaded checkpoint from: {args.checkpoint}")
# Create dataset
data_root = config.get("data_root", "data/PennFudanPed")
if not os.path.exists(data_root):
print(f"Error: Data not found at {data_root}")
return
dataset = PennFudanDataset(root=data_root, transforms=get_transform(train=False))
print(f"Dataset loaded with {len(dataset)} images")
# Validate index
if args.index < 0 or args.index >= len(dataset):
print(f"Error: Index {args.index} out of range (0-{len(dataset)-1})")
return
# Run inference
print(f"Running inference on image {args.index}...")
image, prediction = run_inference(model, dataset, device, args.index)
# Visualize prediction
print("Visualizing predictions...")
fig = visualize_prediction(image, prediction, threshold=args.threshold)
# Save or show
if args.output:
fig.savefig(args.output)
print(f"Visualization saved to: {args.output}")
else:
plt.show()
print("Visualization displayed. Close window to continue.")
if __name__ == "__main__":
main()

test.py

@@ -31,6 +31,8 @@ def main(args):
logging.info(f"Loaded configuration from: {args.config}")
logging.info(f"Checkpoint path: {args.checkpoint}")
logging.info(f"Loaded configuration dictionary: {config}")
if args.max_samples:
logging.info(f"Limiting evaluation to {args.max_samples} samples")
# Validate data path
data_root = config.get("data_root")
@@ -86,12 +88,15 @@ def main(args):
# Run Evaluation
try:
logging.info("Starting model evaluation...")
eval_metrics = evaluate(model, data_loader_test, device)
eval_metrics = evaluate(model, data_loader_test, device, args.max_samples)
# Log detailed metrics
logging.info("--- Evaluation Results ---")
for metric_name, metric_value in eval_metrics.items():
logging.info(f" {metric_name}: {metric_value:.4f}")
if isinstance(metric_value, (int, float)):
logging.info(f" {metric_name}: {metric_value:.4f}")
else:
logging.info(f" {metric_name}: {metric_value}")
logging.info("Evaluation completed successfully")
except Exception as e:
@@ -100,10 +105,20 @@ def main(args):
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Test a trained Mask R-CNN model")
parser.add_argument("--config", required=True, help="Path to configuration file")
parser = argparse.ArgumentParser(
description="Test script for torchvision Mask R-CNN"
)
parser.add_argument(
"--checkpoint", required=True, help="Path to model checkpoint file (.pth)"
"--config", required=True, type=str, help="Path to configuration file"
)
parser.add_argument(
"--checkpoint", required=True, type=str, help="Path to model checkpoint"
)
parser.add_argument(
"--max_samples",
type=int,
default=None,
help="Maximum number of samples to evaluate",
)
args = parser.parse_args()
main(args)

test_prediction.png (new binary file, 944 KiB; not shown)

tests/conftest.py (new file)

@@ -0,0 +1,56 @@
import os
import sys
from pathlib import Path
import pytest
import torch
# Add project root to the path to enable imports
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))
from models.detection import get_maskrcnn_model # noqa: E402
from utils.data_utils import PennFudanDataset, get_transform # noqa: E402
@pytest.fixture
def device():
"""Return CPU device for consistent testing."""
return torch.device("cpu")
@pytest.fixture
def test_config():
"""Return a minimal config dictionary for testing."""
return {
"data_root": "data/PennFudanPed",
"num_classes": 2,
"batch_size": 1,
"device": "cpu",
"output_dir": "test_outputs",
"config_name": "test_run",
}
@pytest.fixture
def small_model(device):
"""Return a small Mask R-CNN model for testing."""
model = get_maskrcnn_model(
num_classes=2, pretrained=False, pretrained_backbone=False
)
model.to(device)
return model
@pytest.fixture
def sample_dataset():
"""Return a small dataset for testing if available."""
data_root = "data/PennFudanPed"
# Skip if data is not available
if not os.path.exists(data_root):
pytest.skip("Test dataset not available")
transforms = get_transform(train=False)
dataset = PennFudanDataset(root=data_root, transforms=transforms)
return dataset

tests/test_data_utils.py (new file)

@@ -0,0 +1,108 @@
import torch
from utils.data_utils import collate_fn, get_transform
def test_dataset_len(sample_dataset):
"""Test that the dataset has the expected length."""
# PennFudanPed has 170 images
assert len(sample_dataset) > 0, "Dataset should not be empty"
def test_dataset_getitem(sample_dataset):
"""Test that __getitem__ returns expected format."""
if len(sample_dataset) == 0:
return # Skip if no data
# Get first item
img, target = sample_dataset[0]
# Check image
assert isinstance(img, torch.Tensor), "Image should be a tensor"
assert img.dim() == 3, "Image should have 3 dimensions (C, H, W)"
assert img.shape[0] == 3, "Image should have 3 channels (RGB)"
# Check target
assert isinstance(target, dict), "Target should be a dictionary"
assert "boxes" in target, "Target should contain 'boxes'"
assert "labels" in target, "Target should contain 'labels'"
assert "masks" in target, "Target should contain 'masks'"
assert "image_id" in target, "Target should contain 'image_id'"
assert "area" in target, "Target should contain 'area'"
assert "iscrowd" in target, "Target should contain 'iscrowd'"
# Check target values
assert (
target["boxes"].shape[1] == 4
), "Boxes should have 4 coordinates (x1, y1, x2, y2)"
assert target["labels"].dim() == 1, "Labels should be a 1D tensor"
assert target["masks"].dim() == 3, "Masks should be a 3D tensor (N, H, W)"
def test_transforms(sample_dataset):
"""Test that transforms are applied correctly."""
if len(sample_dataset) == 0:
return # Skip if no data
# Get original transform
orig_transforms = sample_dataset.transforms
# Apply different transforms
train_transforms = get_transform(train=True)
eval_transforms = get_transform(train=False)
# Test that we can switch transforms
sample_dataset.transforms = train_transforms
img_train, target_train = sample_dataset[0]
sample_dataset.transforms = eval_transforms
img_eval, target_eval = sample_dataset[0]
# Restore original transforms
sample_dataset.transforms = orig_transforms
# Images should be tensors with expected properties
assert img_train.dim() == img_eval.dim() == 3
assert img_train.shape[0] == img_eval.shape[0] == 3
def test_collate_fn():
"""Test the collate function."""
# Create dummy batch data
dummy_img1 = torch.rand(3, 100, 100)
dummy_img2 = torch.rand(3, 100, 100)
dummy_target1 = {
"boxes": torch.tensor([[10, 10, 50, 50]], dtype=torch.float32),
"labels": torch.tensor([1], dtype=torch.int64),
"masks": torch.zeros(1, 100, 100, dtype=torch.uint8),
"image_id": torch.tensor([0]),
"area": torch.tensor([1600.0], dtype=torch.float32),
"iscrowd": torch.tensor([0], dtype=torch.uint8),
}
dummy_target2 = {
"boxes": torch.tensor([[20, 20, 60, 60]], dtype=torch.float32),
"labels": torch.tensor([1], dtype=torch.int64),
"masks": torch.zeros(1, 100, 100, dtype=torch.uint8),
"image_id": torch.tensor([1]),
"area": torch.tensor([1600.0], dtype=torch.float32),
"iscrowd": torch.tensor([0], dtype=torch.uint8),
}
batch = [(dummy_img1, dummy_target1), (dummy_img2, dummy_target2)]
# Apply collate_fn
images, targets = collate_fn(batch)
# Check results
assert len(images) == 2, "Should have 2 images"
assert len(targets) == 2, "Should have 2 targets"
assert torch.allclose(images[0], dummy_img1), "First image should match"
assert torch.allclose(images[1], dummy_img2), "Second image should match"
assert torch.allclose(
targets[0]["boxes"], dummy_target1["boxes"]
), "First boxes should match"
assert torch.allclose(
targets[1]["boxes"], dummy_target2["boxes"]
), "Second boxes should match"

tests/test_model.py (new file)

@@ -0,0 +1,102 @@
import torch
import torchvision
from utils.eval_utils import evaluate
def test_model_creation(small_model):
"""Test that the model is created correctly."""
assert isinstance(small_model, torchvision.models.detection.MaskRCNN)
assert small_model.roi_heads.box_predictor.cls_score.out_features == 2
assert small_model.roi_heads.mask_predictor.mask_fcn_logits.out_channels == 2
def test_model_forward_train_mode(small_model, sample_dataset, device):
"""Test model forward pass in training mode."""
if len(sample_dataset) == 0:
return # Skip if no data
# Set model to training mode
small_model.train()
# Get a batch
img, target = sample_dataset[0]
img = img.to(device)
target = {k: v.to(device) for k, v in target.items()}
# Forward pass with targets should return loss dict in training mode
loss_dict = small_model([img], [target])
# Verify loss dict structure
assert isinstance(loss_dict, dict), "Loss should be a dictionary"
assert "loss_classifier" in loss_dict, "Should have classifier loss"
assert "loss_box_reg" in loss_dict, "Should have box regression loss"
assert "loss_mask" in loss_dict, "Should have mask loss"
assert "loss_objectness" in loss_dict, "Should have objectness loss"
assert "loss_rpn_box_reg" in loss_dict, "Should have RPN box regression loss"
# Verify loss values
for loss_name, loss_value in loss_dict.items():
assert isinstance(loss_value, torch.Tensor), f"{loss_name} should be a tensor"
assert loss_value.dim() == 0, f"{loss_name} should be a scalar tensor"
assert not torch.isnan(loss_value), f"{loss_name} should not be NaN"
assert not torch.isinf(loss_value), f"{loss_name} should not be infinite"
def test_model_forward_eval_mode(small_model, sample_dataset, device):
"""Test model forward pass in evaluation mode."""
if len(sample_dataset) == 0:
return # Skip if no data
# Set model to evaluation mode
small_model.eval()
# Get a batch
img, target = sample_dataset[0]
img = img.to(device)
# Forward pass without targets should return predictions in eval mode
with torch.no_grad():
predictions = small_model([img])
# Verify predictions structure
assert isinstance(predictions, list), "Predictions should be a list"
assert len(predictions) == 1, "Should have predictions for 1 image"
pred = predictions[0]
assert "boxes" in pred, "Predictions should contain 'boxes'"
assert "scores" in pred, "Predictions should contain 'scores'"
assert "labels" in pred, "Predictions should contain 'labels'"
assert "masks" in pred, "Predictions should contain 'masks'"
def test_evaluate_function(small_model, sample_dataset, device):
"""Test the evaluate function."""
if len(sample_dataset) == 0:
return # Skip if no data
# Create a tiny dataloader for testing
from torch.utils.data import DataLoader
from utils.data_utils import collate_fn
# Use only 2 samples for quick testing
small_ds = torch.utils.data.Subset(
sample_dataset, range(min(2, len(sample_dataset)))
)
dataloader = DataLoader(
small_ds, batch_size=1, shuffle=False, collate_fn=collate_fn
)
# Set model to eval mode
small_model.eval()
# Import evaluate function
# Run evaluation
metrics = evaluate(small_model, dataloader, device)
# Check results
assert isinstance(metrics, dict), "Metrics should be a dictionary"
assert "average_loss" in metrics, "Metrics should contain 'average_loss'"
assert metrics["average_loss"] >= 0, "Loss should be non-negative"

tests/test_visualization.py (new file)

@@ -0,0 +1,77 @@
import os
import sys
import matplotlib.pyplot as plt
import torch
# Import visualization functions
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from scripts.visualize_predictions import visualize_prediction # noqa: E402
def test_visualize_prediction():
"""Test that the visualization function works."""
# Create a dummy image tensor
image = torch.rand(3, 400, 600)
# Create a dummy prediction dictionary
prediction = {
"boxes": torch.tensor(
[[100, 100, 200, 200], [300, 300, 400, 400]], dtype=torch.float32
),
"scores": torch.tensor([0.9, 0.7], dtype=torch.float32),
"labels": torch.tensor([1, 1], dtype=torch.int64),
"masks": torch.zeros(2, 1, 400, 600, dtype=torch.float32),
}
# Set some pixels in the mask to 1
prediction["masks"][0, 0, 100:200, 100:200] = 1.0
prediction["masks"][1, 0, 300:400, 300:400] = 1.0
# Call the visualization function
fig = visualize_prediction(image, prediction, threshold=0.5)
# Check that a figure was returned
assert isinstance(fig, plt.Figure)
# Check figure properties
assert len(fig.axes) == 1
# Close the figure to avoid memory leaks
plt.close(fig)
def test_visualize_prediction_threshold():
"""Test that the threshold parameter filters predictions correctly."""
# Create a dummy image tensor
image = torch.rand(3, 400, 600)
# Create a dummy prediction dictionary with varying scores
prediction = {
"boxes": torch.tensor(
[[100, 100, 200, 200], [300, 300, 400, 400], [500, 100, 550, 150]],
dtype=torch.float32,
),
"scores": torch.tensor([0.9, 0.7, 0.3], dtype=torch.float32),
"labels": torch.tensor([1, 1, 1], dtype=torch.int64),
"masks": torch.zeros(3, 1, 400, 600, dtype=torch.float32),
}
# Call the visualization function with different thresholds
fig_low = visualize_prediction(image, prediction, threshold=0.2)
fig_med = visualize_prediction(image, prediction, threshold=0.5)
fig_high = visualize_prediction(image, prediction, threshold=0.8)
# Low threshold should show all 3 boxes
assert "Found 3" in fig_low.axes[0].get_xlabel()
# Medium threshold should show 2 boxes
assert "Found 2" in fig_med.axes[0].get_xlabel()
# High threshold should show 1 box
assert "Found 1" in fig_high.axes[0].get_xlabel()
# Close figures
plt.close(fig_low)
plt.close(fig_med)
plt.close(fig_high)

train.py

@@ -310,7 +310,7 @@ def main(args):
"num_epochs", 10
) - 1:
checkpoint_file = os.path.join(
checkpoint_path, f"checkpoint_epoch_{epoch + 1}.pth"
checkpoint_path, f"checkpoint_epoch_{epoch+1}.pth"
)
checkpoint = {
"epoch": epoch + 1,

utils/data_utils.py

@@ -18,80 +18,91 @@ class PennFudanDataset(torch.utils.data.Dataset):
self.masks = sorted(list(os.listdir(os.path.join(root, "PedMasks"))))
def __getitem__(self, idx):
# Load images and masks
"""Get a sample from the dataset.
Args:
idx (int): Index of the sample to retrieve.
Returns:
tuple: (image, target) where target is a dictionary containing various object annotations.
"""
# Load image
img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
# Use PIL to load images (more memory efficient)
img = Image.open(img_path).convert("RGB")
# Note: masks are not converted to RGB; each pixel holds an instance index value
mask = Image.open(mask_path)
# Convert mask to numpy array
# Convert mask PIL image to numpy array
mask = np.array(mask)
# Instances are encoded as different colors
# Find all object instances (each instance has a unique value in the mask)
# Value 0 is the background
obj_ids = np.unique(mask)
# First id is the background, so remove it
obj_ids = obj_ids[1:]
obj_ids = obj_ids[1:] # Remove background (id=0)
# Split the color-encoded mask into a set of binary masks
binary_masks = mask == obj_ids[:, None, None]
# Split the mask into binary masks for each object instance
masks = mask == obj_ids[:, None, None]
# Get bounding box coordinates for each mask
# Get bounding box for each mask
num_objs = len(obj_ids)
boxes = []
for i in range(num_objs):
pos = np.where(binary_masks[i])
pos = np.where(masks[i])
if len(pos[0]) == 0 or len(pos[1]) == 0: # Skip empty masks
continue
xmin = np.min(pos[1])
xmax = np.max(pos[1])
ymin = np.min(pos[0])
ymax = np.max(pos[0])
# Filter out potentially empty masks or masks with zero area
if xmax > xmin and ymax > ymin:
boxes.append([xmin, ymin, xmax, ymax])
else:
# If box is invalid, we might need to handle this
# For now, let's remove the corresponding mask as well
# This requires careful index handling if filtering occurs
# A safer approach might be to filter masks *after* box generation
# Let's recalculate binary_masks based on valid boxes later if needed
pass # placeholder for potential filtering logic
# Ensure boxes list isn't empty if filtering happened
if not boxes:
# Handle case with no valid boxes found - return dummy target? Or raise error?
# For now, let's create dummy tensors. This should be revisited.
print(
f"Warning: No valid boxes found for image {idx}. Returning dummy target."
)
boxes = torch.zeros((0, 4), dtype=torch.float32)
labels = torch.zeros((0,), dtype=torch.int64)
binary_masks = torch.zeros(
(0, mask.shape[0], mask.shape[1]), dtype=torch.uint8
)
image_id = torch.tensor([idx])
area = torch.zeros((0,), dtype=torch.float32)
iscrowd = torch.zeros((0,), dtype=torch.uint8)
else:
# Skip boxes with zero area
if xmax <= xmin or ymax <= ymin:
continue
boxes.append([xmin, ymin, xmax, ymax])
# Convert everything to tensors
if boxes:
boxes = torch.as_tensor(boxes, dtype=torch.float32)
# There is only one class (pedestrian)
labels = torch.ones((num_objs,), dtype=torch.int64)
binary_masks = torch.as_tensor(binary_masks, dtype=torch.uint8)
image_id = torch.tensor([idx])
# Calculate area
labels = torch.ones(
(len(boxes),), dtype=torch.int64
) # All objects are pedestrians (class 1)
masks = torch.as_tensor(masks, dtype=torch.uint8)
# Calculate area of each box
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
# Assume all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.uint8)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["masks"] = binary_masks
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
# All instances are not crowd
iscrowd = torch.zeros((len(boxes),), dtype=torch.uint8)
# Create the target dictionary
target = {
"boxes": boxes,
"labels": labels,
"masks": masks,
"image_id": torch.tensor([idx]),
"area": area,
"iscrowd": iscrowd,
}
else:
# Handle case with no valid objects (rare but possible)
target = {
"boxes": torch.zeros((0, 4), dtype=torch.float32),
"labels": torch.zeros((0,), dtype=torch.int64),
"masks": torch.zeros(
(0, mask.shape[0], mask.shape[1]), dtype=torch.uint8
),
"image_id": torch.tensor([idx]),
"area": torch.zeros((0,), dtype=torch.float32),
"iscrowd": torch.zeros((0,), dtype=torch.uint8),
}
# Apply transforms if provided
if self.transforms is not None:
# Apply transforms to both image and target
# Note: torchvision v2 transforms handle target dicts automatically
img, target = self.transforms(img, target)
return img, target
@@ -117,15 +128,18 @@ def get_transform(train):
# Convert to PyTorch tensor and normalize
transforms.append(T.ToImage())
# Add resize transform to reduce memory usage (max size of 800px)
transforms.append(T.Resize(800))
# Resize images to control memory usage
# Use a smaller size for training (more memory-intensive due to gradients)
if train:
transforms.append(T.Resize(700))
else:
transforms.append(T.Resize(800)) # Can use larger size for eval
transforms.append(T.ToDtype(torch.float32, scale=True))
# Data augmentation for training
if train:
transforms.append(T.RandomHorizontalFlip(0.5))
# Could add more augmentations here if desired
return T.Compose(transforms)
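Following the "could add more augmentations" note above, one option would be torchvision's v2 photometric jitter. A self-contained sketch of a train-time pipeline with that extra step (an assumption, not the repo's actual config):
```python
import torch
import torchvision.transforms.v2 as T

# Sketch: the repo's train-time pipeline plus one extra v2 augmentation.
train_transform = T.Compose([
    T.ToImage(),
    T.Resize(700),                      # smaller size for training memory
    T.ToDtype(torch.float32, scale=True),
    T.RandomHorizontalFlip(0.5),
    T.RandomPhotometricDistort(p=0.5),  # color/contrast jitter (assumed addition)
])
```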

utils/eval_utils.py

@@ -1,36 +1,84 @@
import logging
import time
import numpy as np
import torch
from torchvision.ops import box_iou
def evaluate(model, data_loader, device):
def evaluate(model, data_loader, device, max_samples=None):
"""Performs evaluation on the dataset for one epoch.
Args:
model (torch.nn.Module): The model to evaluate.
data_loader (torch.utils.data.DataLoader): DataLoader for the evaluation data.
device (torch.device): The device to run evaluation on.
max_samples (int, optional): Maximum number of batches to evaluate. If None, evaluate all.
Returns:
dict: A dictionary containing evaluation metrics (e.g., average loss).
dict: A dictionary containing evaluation metrics (e.g., average loss, mAP).
"""
model.eval() # Set model to evaluation mode
total_loss = 0.0
num_batches = len(data_loader)
# Limit evaluation samples if specified
if max_samples is not None:
num_batches = min(num_batches, max_samples)
logging.info(f"Limiting evaluation to {num_batches} batches")
eval_start_time = time.time()
status_interval = max(1, num_batches // 10) # Log status roughly 10 times
# Initialize metrics collection
inference_times = []
# IoU thresholds for mAP calculation
iou_thresholds = [0.5, 0.75]  # COCO-style IoU thresholds (a full 0.5:0.95 sweep is not implemented; the original duplicate 0.5 double-counted that threshold)
confidence_thresholds = [0.5, 0.75, 0.9] # Different confidence thresholds
# Initialize counters for metrics
metric_accumulators = initialize_metric_accumulators(
iou_thresholds, confidence_thresholds
)
logging.info("--- Starting Evaluation --- ")
with torch.no_grad(): # Disable gradient calculations
for i, (images, targets) in enumerate(data_loader):
# Stop if we've reached the max samples
if max_samples is not None and i >= max_samples:
break
# Free cached memory
if torch.cuda.is_available():
torch.cuda.empty_cache()
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
# In eval mode with targets, Mask R-CNN should still return losses
# If it returned predictions, logic here would change to process predictions
# Measure inference time
start_time = time.time()
# Get predictions in eval mode
predictions = model(images)
inference_time = time.time() - start_time
inference_times.append(inference_time)
# Process metrics on-the-fly for this batch only
process_batch_metrics(
predictions,
targets,
metric_accumulators,
iou_thresholds,
confidence_thresholds,
)
# Compute losses (switch to train mode temporarily)
model.train()
loss_dict = model(images, targets)
model.eval()
# Calculate total loss
losses = sum(loss for loss in loss_dict.values())
loss_value = losses.item()
total_loss += loss_value
@@ -38,18 +86,727 @@ def evaluate(model, data_loader, device):
if (i + 1) % status_interval == 0:
logging.info(f" Evaluated batch {i + 1}/{num_batches}")
# Explicitly clean up to help with memory
del images, targets, predictions, loss_dict
if torch.cuda.is_available():
torch.cuda.empty_cache()
# Calculate basic metrics
avg_loss = total_loss / num_batches if num_batches > 0 else 0
avg_inference_time = np.mean(inference_times) if inference_times else 0
# Calculate final metrics from accumulators
metrics = {
"average_loss": avg_loss,
"average_inference_time": avg_inference_time,
}
# Compute final metrics from accumulators
metrics.update(finalize_metrics(metric_accumulators))
eval_duration = time.time() - eval_start_time
# Log results
logging.info("--- Evaluation Finished ---")
logging.info(f" Average Evaluation Loss: {avg_loss:.4f}")
logging.info(f" Average Inference Time: {avg_inference_time:.4f}s per batch")
# Log detailed metrics
for metric_name, metric_value in metrics.items():
if metric_name != "average_loss": # Already logged
if isinstance(metric_value, (int, float)):
logging.info(f" {metric_name}: {metric_value:.4f}")
else:
logging.info(f" {metric_name}: {metric_value}")
logging.info(f" Evaluation Duration: {eval_duration:.2f}s")
# Return the full metrics dictionary (average loss, timing, and detection metrics);
# overwriting it with only the average loss here discarded everything computed above
return metrics
def initialize_metric_accumulators(iou_thresholds, confidence_thresholds):
"""Initialize accumulators for incremental metric calculation"""
accumulators = {
"total_gt": 0,
"map_accumulators": {},
"conf_accumulators": {},
"size_accumulators": {
"small_gt": 0,
"medium_gt": 0,
"large_gt": 0,
"small_tp": 0,
"medium_tp": 0,
"large_tp": 0,
"small_det": 0,
"medium_det": 0,
"large_det": 0,
},
}
# Initialize map accumulators for each IoU threshold
for iou in iou_thresholds:
accumulators["map_accumulators"][iou] = {
"true_positives": 0,
"false_positives": 0,
"total_detections": 0,
}
# Initialize confidence accumulators
for conf in confidence_thresholds:
accumulators["conf_accumulators"][conf] = {
"true_positives": 0,
"detections": 0,
}
return accumulators
def process_batch_metrics(
predictions, targets, accumulators, iou_thresholds, confidence_thresholds
):
"""Process metrics for a single batch incrementally"""
small_threshold = 32 * 32 # Small objects: area < 32²
medium_threshold = 96 * 96 # Medium objects: 32² <= area < 96²
# Count total ground truth boxes in this batch
batch_gt = sum(len(target["boxes"]) for target in targets)
accumulators["total_gt"] += batch_gt
# Process all predictions in the batch
for pred, target in zip(predictions, targets):
pred_boxes = pred["boxes"]
pred_scores = pred["scores"]
pred_labels = pred["labels"]
gt_boxes = target["boxes"]
gt_labels = target["labels"]
# Skip if no predictions or no ground truth
if len(pred_boxes) == 0 or len(gt_boxes) == 0:
continue
# Calculate IoU between predictions and ground truth
iou_matrix = box_iou(pred_boxes, gt_boxes)
# Process size-based metrics
gt_areas = target.get("area", None)
if gt_areas is None:
# Calculate if not provided
gt_areas = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (
gt_boxes[:, 3] - gt_boxes[:, 1]
)
# Count ground truth by size
small_mask_gt = gt_areas < small_threshold
medium_mask_gt = (gt_areas >= small_threshold) & (gt_areas < medium_threshold)
large_mask_gt = gt_areas >= medium_threshold
accumulators["size_accumulators"]["small_gt"] += torch.sum(small_mask_gt).item()
accumulators["size_accumulators"]["medium_gt"] += torch.sum(
medium_mask_gt
).item()
accumulators["size_accumulators"]["large_gt"] += torch.sum(large_mask_gt).item()
# Calculate areas for predictions
pred_areas = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (
pred_boxes[:, 3] - pred_boxes[:, 1]
)
# Count predictions by size (with confidence >= 0.5)
conf_mask = pred_scores >= 0.5
if torch.sum(conf_mask) == 0:
continue # Skip if no predictions meet confidence threshold
small_mask = (pred_areas < small_threshold) & conf_mask
medium_mask = (
(pred_areas >= small_threshold)
& (pred_areas < medium_threshold)
& conf_mask
)
large_mask = (pred_areas >= medium_threshold) & conf_mask
accumulators["size_accumulators"]["small_det"] += torch.sum(small_mask).item()
accumulators["size_accumulators"]["medium_det"] += torch.sum(medium_mask).item()
accumulators["size_accumulators"]["large_det"] += torch.sum(large_mask).item()
# Process metrics for each IoU threshold
for iou_threshold in iou_thresholds:
process_iou_metrics(
pred_boxes,
pred_scores,
pred_labels,
gt_boxes,
gt_labels,
iou_matrix,
accumulators["map_accumulators"][iou_threshold],
iou_threshold,
)
# Process metrics for each confidence threshold
for conf_threshold in confidence_thresholds:
process_confidence_metrics(
pred_boxes,
pred_scores,
pred_labels,
gt_boxes,
gt_labels,
iou_matrix,
accumulators["conf_accumulators"][conf_threshold],
conf_threshold,
)
# Process size-based true positives with fixed IoU threshold of 0.5
# Use a new gt_matched array to avoid interference with other metric calculations
gt_matched = torch.zeros(len(gt_boxes), dtype=torch.bool)
filtered_mask = pred_scores >= 0.5
if torch.sum(filtered_mask) > 0:
filtered_boxes = pred_boxes[filtered_mask]
filtered_scores = pred_scores[filtered_mask]
filtered_labels = pred_labels[filtered_mask]
# Recalculate IoU for filtered boxes
filtered_iou_matrix = box_iou(filtered_boxes, gt_boxes)
# Sort predictions by confidence
sorted_indices = torch.argsort(filtered_scores, descending=True)
for idx in sorted_indices:
best_iou, best_gt_idx = torch.max(filtered_iou_matrix[idx], dim=0)
if best_iou >= 0.5 and not gt_matched[best_gt_idx]:
if filtered_labels[idx] == gt_labels[best_gt_idx]:
gt_matched[best_gt_idx] = True
# Categorize true positive by ground truth size (not prediction size)
area = gt_areas[best_gt_idx].item()
if area < small_threshold:
accumulators["size_accumulators"]["small_tp"] += 1
elif area < medium_threshold:
accumulators["size_accumulators"]["medium_tp"] += 1
else:
accumulators["size_accumulators"]["large_tp"] += 1
def process_iou_metrics(
pred_boxes,
pred_scores,
pred_labels,
gt_boxes,
gt_labels,
iou_matrix,
accumulator,
iou_threshold,
):
"""Process metrics for a specific IoU threshold"""
# Apply a minimum confidence threshold of 0.05 for metrics
min_conf_threshold = 0.05
conf_mask = pred_scores >= min_conf_threshold
if torch.sum(conf_mask) == 0:
return # Skip if no predictions after confidence filtering
# Filter predictions by confidence
filtered_boxes = pred_boxes[conf_mask]
filtered_scores = pred_scores[conf_mask]
filtered_labels = pred_labels[conf_mask]
# Initialize array to track which gt boxes have been matched
gt_matched = torch.zeros(len(gt_boxes), dtype=torch.bool)
# We may need a filtered IoU matrix if we're filtering predictions
if len(filtered_boxes) < len(pred_boxes):
# Recalculate IoU for filtered predictions
filtered_iou_matrix = box_iou(filtered_boxes, gt_boxes)
else:
filtered_iou_matrix = iou_matrix
# Sort predictions by confidence score (high to low)
sorted_indices = torch.argsort(filtered_scores, descending=True)
# True positives count for this batch
batch_tp = 0
for idx in sorted_indices:
# Find best matching ground truth box
iou_values = filtered_iou_matrix[idx]
# Skip if no ground truth boxes
if len(iou_values) == 0:
# This is a false positive since there's no ground truth to match
accumulator["false_positives"] += 1
continue
best_iou, best_gt_idx = torch.max(iou_values, dim=0)
# Check if the prediction matches a ground truth box
if (
best_iou >= iou_threshold
and not gt_matched[best_gt_idx]
and filtered_labels[idx] == gt_labels[best_gt_idx]
):
batch_tp += 1
gt_matched[best_gt_idx] = True
else:
accumulator["false_positives"] += 1
# Update true positives - Important: Don't artificially cap true positives here
# Let finalize_metrics handle the capping to avoid recall underestimation during intermediate calculations
accumulator["true_positives"] += batch_tp
# Count total detection (after confidence filtering)
accumulator["total_detections"] += len(filtered_boxes)
def process_confidence_metrics(
pred_boxes,
pred_scores,
pred_labels,
gt_boxes,
gt_labels,
iou_matrix,
accumulator,
conf_threshold,
):
"""Process metrics for a specific confidence threshold"""
# Filter by confidence
mask = pred_scores >= conf_threshold
if torch.sum(mask) == 0:
return # Skip if no predictions after filtering
filtered_boxes = pred_boxes[mask]
filtered_scores = pred_scores[mask]
filtered_labels = pred_labels[mask]
accumulator["detections"] += len(filtered_boxes)
if len(filtered_boxes) == 0 or len(gt_boxes) == 0:
return
# Calculate matches with fixed IoU threshold of 0.5
gt_matched = torch.zeros(len(gt_boxes), dtype=torch.bool)
# We need to recalculate IoU for the filtered boxes
filtered_iou_matrix = box_iou(filtered_boxes, gt_boxes)
# Sort by confidence for consistent ordering
sorted_indices = torch.argsort(filtered_scores, descending=True)
for pred_idx in sorted_indices:
best_iou, best_gt_idx = torch.max(filtered_iou_matrix[pred_idx], dim=0)
if best_iou >= 0.5 and not gt_matched[best_gt_idx]:
if filtered_labels[pred_idx] == gt_labels[best_gt_idx]:
accumulator["true_positives"] += 1
gt_matched[best_gt_idx] = True
def finalize_metrics(accumulators):
"""Calculate final metrics from accumulators"""
metrics = {}
total_gt = accumulators["total_gt"]
# Calculate mAP metrics
for iou_threshold, map_acc in accumulators["map_accumulators"].items():
true_positives = map_acc["true_positives"]
false_positives = map_acc["false_positives"]
# Calculate metrics - Only cap true positives at the very end for final metrics
# to prevent recall underestimation during intermediate calculations
precision = true_positives / max(true_positives + false_positives, 1)
recall = true_positives / max(total_gt, 1)
# Cap metrics for final reporting to ensure they're in valid range
precision = min(1.0, precision)
recall = min(1.0, recall)
f1_score = 2 * precision * recall / max(precision + recall, 1e-6)
# Simple average precision calculation (precision * recall)
# This is a simplification; full AP calculation requires a PR curve
ap = precision * recall
metrics.update(
{
f"mAP@{iou_threshold}": ap,
f"precision@{iou_threshold}": precision,
f"recall@{iou_threshold}": recall,
f"f1_score@{iou_threshold}": f1_score,
f"tp@{iou_threshold}": true_positives,
f"fp@{iou_threshold}": false_positives,
"gt_total": total_gt,
}
)
# Calculate confidence threshold metrics
for conf_threshold, conf_acc in accumulators["conf_accumulators"].items():
true_positives = conf_acc["true_positives"]
detections = conf_acc["detections"]
# Calculate metrics without artificial capping to prevent recall underestimation
precision = true_positives / max(detections, 1)
recall = true_positives / max(total_gt, 1)
# Cap metrics for final reporting only
precision = min(1.0, precision)
recall = min(1.0, recall)
f1_score = 2 * precision * recall / max(precision + recall, 1e-6)
metrics.update(
{
f"precision@conf{conf_threshold}": precision,
f"recall@conf{conf_threshold}": recall,
f"f1_score@conf{conf_threshold}": f1_score,
f"detections@conf{conf_threshold}": detections,
f"tp@conf{conf_threshold}": true_positives,
}
)
# Calculate size metrics
size_acc = accumulators["size_accumulators"]
small_gt = size_acc["small_gt"]
medium_gt = size_acc["medium_gt"]
large_gt = size_acc["large_gt"]
small_tp = size_acc["small_tp"]
medium_tp = size_acc["medium_tp"]
large_tp = size_acc["large_tp"]
small_det = size_acc["small_det"]
medium_det = size_acc["medium_det"]
large_det = size_acc["large_det"]
# Calculate precision and recall without artificial capping
small_precision = small_tp / max(small_det, 1) if small_det > 0 else 0
small_recall = small_tp / max(small_gt, 1) if small_gt > 0 else 0
medium_precision = medium_tp / max(medium_det, 1) if medium_det > 0 else 0
medium_recall = medium_tp / max(medium_gt, 1) if medium_gt > 0 else 0
large_precision = large_tp / max(large_det, 1) if large_det > 0 else 0
large_recall = large_tp / max(large_gt, 1) if large_gt > 0 else 0
# Cap metrics for final reporting
small_precision = min(1.0, small_precision)
small_recall = min(1.0, small_recall)
medium_precision = min(1.0, medium_precision)
medium_recall = min(1.0, medium_recall)
large_precision = min(1.0, large_precision)
large_recall = min(1.0, large_recall)
metrics.update(
{
"small_precision": small_precision,
"small_recall": small_recall,
"small_count": small_gt,
"small_tp": small_tp,
"small_det": small_det,
"medium_precision": medium_precision,
"medium_recall": medium_recall,
"medium_count": medium_gt,
"medium_tp": medium_tp,
"medium_det": medium_det,
"large_precision": large_precision,
"large_recall": large_recall,
"large_count": large_gt,
"large_tp": large_tp,
"large_det": large_det,
}
)
return metrics
def calculate_map(predictions, targets, iou_threshold=0.5):
"""
Calculate mean Average Precision (mAP) at a specific IoU threshold.
Args:
predictions (list): List of prediction dictionaries
targets (list): List of target dictionaries
iou_threshold (float): IoU threshold for considering a detection as correct
Returns:
dict: Dictionary with mAP, precision, recall and F1 score
"""
# Initialize counters
total_gt = 0
total_detections = 0
true_positives = 0
false_positives = 0
# Count total ground truth boxes
for target in targets:
total_gt += len(target["boxes"])
# Process all predictions
for pred, target in zip(predictions, targets):
pred_boxes = pred["boxes"]
pred_scores = pred["scores"]
pred_labels = pred["labels"]
gt_boxes = target["boxes"]
gt_labels = target["labels"]
# Skip if no predictions or no ground truth
if len(pred_boxes) == 0 or len(gt_boxes) == 0:
continue
# Calculate IoU between predictions and ground truth
iou_matrix = box_iou(pred_boxes, gt_boxes)
# Initialize array to track which gt boxes have been matched
gt_matched = torch.zeros(len(gt_boxes), dtype=torch.bool)
# Sort predictions by confidence score (high to low)
sorted_indices = torch.argsort(pred_scores, descending=True)
# Count true positives and false positives
for idx in sorted_indices:
# Find best matching ground truth box
iou_values = iou_matrix[idx]
best_iou, best_gt_idx = torch.max(iou_values, dim=0)
# Check if the prediction matches a ground truth box
if (
best_iou >= iou_threshold
and not gt_matched[best_gt_idx]
and pred_labels[idx] == gt_labels[best_gt_idx]
):
true_positives += 1
gt_matched[best_gt_idx] = True
else:
false_positives += 1
total_detections += len(pred_boxes)
# Calculate metrics
precision = true_positives / max(true_positives + false_positives, 1)
recall = true_positives / max(total_gt, 1)
# Cap metrics for final reporting
precision = min(1.0, precision)
recall = min(1.0, recall)
f1_score = 2 * precision * recall / max(precision + recall, 1e-6)
return {
"mAP": precision * recall, # Simplified mAP calculation
"precision": precision,
"recall": recall,
"f1_score": f1_score,
"true_positives": true_positives,
"false_positives": false_positives,
"total_gt": total_gt,
"total_detections": total_detections,
}
def calculate_metrics_at_confidence(predictions, targets, confidence_threshold=0.5):
"""
Calculate detection metrics at a specific confidence threshold.
Args:
predictions (list): List of prediction dictionaries
targets (list): List of target dictionaries
confidence_threshold (float): Confidence threshold to filter predictions
Returns:
dict: Dictionary with precision, recall, F1 score and detection count
"""
# Initialize counters
total_gt = 0
detections = 0
true_positives = 0
# Count total ground truth boxes
for target in targets:
total_gt += len(target["boxes"])
# Process all predictions with confidence filter
for pred, target in zip(predictions, targets):
# Filter predictions by confidence threshold
mask = pred["scores"] >= confidence_threshold
filtered_boxes = pred["boxes"][mask]
filtered_labels = pred["labels"][mask] if len(mask) > 0 else []
detections += len(filtered_boxes)
# Skip if no predictions after filtering
if len(filtered_boxes) == 0:
continue
# Calculate IoU with ground truth
gt_boxes = target["boxes"]
gt_labels = target["labels"]
# Skip if no ground truth
if len(gt_boxes) == 0:
continue
iou_matrix = box_iou(filtered_boxes, gt_boxes)
# Initialize array to track which gt boxes have been matched
gt_matched = torch.zeros(len(gt_boxes), dtype=torch.bool)
# Find matches based on IoU threshold of 0.5
for pred_idx in range(len(filtered_boxes)):
best_iou, best_gt_idx = torch.max(iou_matrix[pred_idx], dim=0)
if best_iou >= 0.5 and not gt_matched[best_gt_idx]:
if (
len(filtered_labels) > 0
and filtered_labels[pred_idx] == gt_labels[best_gt_idx]
):
true_positives += 1
gt_matched[best_gt_idx] = True
# Calculate metrics
precision = true_positives / max(detections, 1)
recall = true_positives / max(total_gt, 1)
# Cap metrics for final reporting
precision = min(1.0, precision)
recall = min(1.0, recall)
f1_score = 2 * precision * recall / max(precision + recall, 1e-6)
return {
"precision": precision,
"recall": recall,
"f1_score": f1_score,
"detections": detections,
"true_positives": true_positives,
}
def calculate_size_based_metrics(predictions, targets):
    """
    Calculate detection performance by object size.

    Args:
        predictions (list): List of prediction dictionaries
        targets (list): List of target dictionaries

    Returns:
        dict: Dictionary with size-based metrics
    """
    # Size-category boundaries in pixels squared, following the COCO
    # convention: small < 32^2 (1024), 32^2 <= medium < 96^2 (9216),
    # large >= 96^2
    small_threshold = 32 * 32
    medium_threshold = 96 * 96

    # Running counts for each size category
    small_gt, medium_gt, large_gt = 0, 0, 0
    small_tp, medium_tp, large_tp = 0, 0, 0
    small_det, medium_det, large_det = 0, 0, 0

    for pred, target in zip(predictions, targets):
        pred_boxes = pred["boxes"]
        pred_scores = pred["scores"]
        gt_boxes = target["boxes"]

        # Ground-truth areas: use the provided "area" field when present,
        # otherwise compute them from the boxes
        gt_areas = target.get("area", None)
        if gt_areas is None and len(gt_boxes) > 0:
            gt_areas = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (
                gt_boxes[:, 3] - gt_boxes[:, 1]
            )

        # Count ground truth by size even when there are no predictions, so
        # recall is not inflated on images the model misses entirely
        if len(gt_boxes) > 0:
            small_gt += torch.sum(gt_areas < small_threshold).item()
            medium_gt += torch.sum(
                (gt_areas >= small_threshold) & (gt_areas < medium_threshold)
            ).item()
            large_gt += torch.sum(gt_areas >= medium_threshold).item()

        # Nothing left to count or match without predictions
        if len(pred_boxes) == 0:
            continue

        # Count confident predictions (score >= 0.5) by predicted-box size
        pred_areas = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (
            pred_boxes[:, 3] - pred_boxes[:, 1]
        )
        conf_mask = pred_scores >= 0.5
        small_det += torch.sum((pred_areas < small_threshold) & conf_mask).item()
        medium_det += torch.sum(
            (pred_areas >= small_threshold)
            & (pred_areas < medium_threshold)
            & conf_mask
        ).item()
        large_det += torch.sum((pred_areas >= medium_threshold) & conf_mask).item()

        # Matching requires both predictions and ground truth
        if len(gt_boxes) == 0:
            continue

        # IoU between predictions and ground truth
        iou_matrix = box_iou(pred_boxes, gt_boxes)

        # Track which ground-truth boxes have already been matched
        gt_matched = torch.zeros(len(gt_boxes), dtype=torch.bool)

        # Match predictions in order of descending confidence
        sorted_indices = torch.argsort(pred_scores, descending=True)
        for idx in sorted_indices:
            if pred_scores[idx] < 0.5:  # Skip low-confidence detections
                continue
            # A prediction counts as a true positive if its best unmatched
            # ground-truth box overlaps with IoU >= 0.5
            best_iou, best_gt_idx = torch.max(iou_matrix[idx], dim=0)
            if best_iou >= 0.5 and not gt_matched[best_gt_idx]:
                gt_matched[best_gt_idx] = True
                # Categorize the true positive by ground-truth size
                area = gt_areas[best_gt_idx].item()
                if area < small_threshold:
                    small_tp += 1
                elif area < medium_threshold:
                    medium_tp += 1
                else:
                    large_tp += 1

    # Assemble metrics for each size category. Detections are bucketed by
    # predicted-box size while true positives are bucketed by ground-truth
    # size, so precision is capped at 1.0 to guard against boxes that
    # straddle a size boundary; recall needs no cap
    size_metrics = {
        "small_precision": min(1.0, small_tp / max(small_det, 1)),
        "small_recall": small_tp / max(small_gt, 1),
        "small_count": small_gt,
        "medium_precision": min(1.0, medium_tp / max(medium_det, 1)),
        "medium_recall": medium_tp / max(medium_gt, 1),
        "medium_count": medium_gt,
        "large_precision": min(1.0, large_tp / max(large_det, 1)),
        "large_recall": large_tp / max(large_gt, 1),
        "large_count": large_gt,
    }
    return size_metrics

# Example usage (can be removed or kept for testing):
if __name__ == "__main__":
    # Smoke test with hand-made boxes; no model, dataloader, or device needed
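    # A minimal sketch: one image with two predictions (one confident, one
    # not) and a single ground-truth box. This assumes, as the docstring
    # above suggests, that calculate_detection_metrics accepts a
    # confidence_threshold keyword argument.
    preds = [
        {
            "boxes": torch.tensor(
                [[10.0, 10.0, 50.0, 50.0], [60.0, 60.0, 90.0, 90.0]]
            ),
            "scores": torch.tensor([0.9, 0.4]),
            "labels": torch.tensor([1, 1]),
        }
    ]
    gts = [
        {
            "boxes": torch.tensor([[12.0, 12.0, 48.0, 48.0]]),
            "labels": torch.tensor([1]),
        }
    ]
    print(calculate_detection_metrics(preds, gts, confidence_threshold=0.5))
    print(calculate_size_based_metrics(preds, gts))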

195
uv.lock generated
View File

@@ -24,6 +24,56 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 },
]
[[package]]
name = "contourpy"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/25/c2/fc7193cc5383637ff390a712e88e4ded0452c9fbcf84abe3de5ea3df1866/contourpy-1.3.1.tar.gz", hash = "sha256:dfd97abd83335045a913e3bcc4a09c0ceadbe66580cf573fe961f4a825efa699", size = 13465753 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/37/6b/175f60227d3e7f5f1549fcb374592be311293132207e451c3d7c654c25fb/contourpy-1.3.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:0ffa84be8e0bd33410b17189f7164c3589c229ce5db85798076a3fa136d0e509", size = 271494 },
{ url = "https://files.pythonhosted.org/packages/6b/6a/7833cfae2c1e63d1d8875a50fd23371394f540ce809d7383550681a1fa64/contourpy-1.3.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:805617228ba7e2cbbfb6c503858e626ab528ac2a32a04a2fe88ffaf6b02c32bc", size = 255444 },
{ url = "https://files.pythonhosted.org/packages/7f/b3/7859efce66eaca5c14ba7619791b084ed02d868d76b928ff56890d2d059d/contourpy-1.3.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ade08d343436a94e633db932e7e8407fe7de8083967962b46bdfc1b0ced39454", size = 307628 },
{ url = "https://files.pythonhosted.org/packages/48/b2/011415f5e3f0a50b1e285a0bf78eb5d92a4df000553570f0851b6e309076/contourpy-1.3.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:47734d7073fb4590b4a40122b35917cd77be5722d80683b249dac1de266aac80", size = 347271 },
{ url = "https://files.pythonhosted.org/packages/84/7d/ef19b1db0f45b151ac78c65127235239a8cf21a59d1ce8507ce03e89a30b/contourpy-1.3.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2ba94a401342fc0f8b948e57d977557fbf4d515f03c67682dd5c6191cb2d16ec", size = 318906 },
{ url = "https://files.pythonhosted.org/packages/ba/99/6794142b90b853a9155316c8f470d2e4821fe6f086b03e372aca848227dd/contourpy-1.3.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:efa874e87e4a647fd2e4f514d5e91c7d493697127beb95e77d2f7561f6905bd9", size = 323622 },
{ url = "https://files.pythonhosted.org/packages/3c/0f/37d2c84a900cd8eb54e105f4fa9aebd275e14e266736778bb5dccbf3bbbb/contourpy-1.3.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:1bf98051f1045b15c87868dbaea84f92408337d4f81d0e449ee41920ea121d3b", size = 1266699 },
{ url = "https://files.pythonhosted.org/packages/3a/8a/deb5e11dc7d9cc8f0f9c8b29d4f062203f3af230ba83c30a6b161a6effc9/contourpy-1.3.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:61332c87493b00091423e747ea78200659dc09bdf7fd69edd5e98cef5d3e9a8d", size = 1326395 },
{ url = "https://files.pythonhosted.org/packages/1a/35/7e267ae7c13aaf12322ccc493531f1e7f2eb8fba2927b9d7a05ff615df7a/contourpy-1.3.1-cp312-cp312-win32.whl", hash = "sha256:e914a8cb05ce5c809dd0fe350cfbb4e881bde5e2a38dc04e3afe1b3e58bd158e", size = 175354 },
{ url = "https://files.pythonhosted.org/packages/a1/35/c2de8823211d07e8a79ab018ef03960716c5dff6f4d5bff5af87fd682992/contourpy-1.3.1-cp312-cp312-win_amd64.whl", hash = "sha256:08d9d449a61cf53033612cb368f3a1b26cd7835d9b8cd326647efe43bca7568d", size = 220971 },
{ url = "https://files.pythonhosted.org/packages/9a/e7/de62050dce687c5e96f946a93546910bc67e483fe05324439e329ff36105/contourpy-1.3.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:a761d9ccfc5e2ecd1bf05534eda382aa14c3e4f9205ba5b1684ecfe400716ef2", size = 271548 },
{ url = "https://files.pythonhosted.org/packages/78/4d/c2a09ae014ae984c6bdd29c11e74d3121b25eaa117eca0bb76340efd7e1c/contourpy-1.3.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:523a8ee12edfa36f6d2a49407f705a6ef4c5098de4f498619787e272de93f2d5", size = 255576 },
{ url = "https://files.pythonhosted.org/packages/ab/8a/915380ee96a5638bda80cd061ccb8e666bfdccea38d5741cb69e6dbd61fc/contourpy-1.3.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece6df05e2c41bd46776fbc712e0996f7c94e0d0543af1656956d150c4ca7c81", size = 306635 },
{ url = "https://files.pythonhosted.org/packages/29/5c/c83ce09375428298acd4e6582aeb68b1e0d1447f877fa993d9bf6cd3b0a0/contourpy-1.3.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:573abb30e0e05bf31ed067d2f82500ecfdaec15627a59d63ea2d95714790f5c2", size = 345925 },
{ url = "https://files.pythonhosted.org/packages/29/63/5b52f4a15e80c66c8078a641a3bfacd6e07106835682454647aca1afc852/contourpy-1.3.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a9fa36448e6a3a1a9a2ba23c02012c43ed88905ec80163f2ffe2421c7192a5d7", size = 318000 },
{ url = "https://files.pythonhosted.org/packages/9a/e2/30ca086c692691129849198659bf0556d72a757fe2769eb9620a27169296/contourpy-1.3.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ea9924d28fc5586bf0b42d15f590b10c224117e74409dd7a0be3b62b74a501c", size = 322689 },
{ url = "https://files.pythonhosted.org/packages/6b/77/f37812ef700f1f185d348394debf33f22d531e714cf6a35d13d68a7003c7/contourpy-1.3.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5b75aa69cb4d6f137b36f7eb2ace9280cfb60c55dc5f61c731fdf6f037f958a3", size = 1268413 },
{ url = "https://files.pythonhosted.org/packages/3f/6d/ce84e79cdd128542ebeb268f84abb4b093af78e7f8ec504676673d2675bc/contourpy-1.3.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:041b640d4ec01922083645a94bb3b2e777e6b626788f4095cf21abbe266413c1", size = 1326530 },
{ url = "https://files.pythonhosted.org/packages/72/22/8282f4eae20c73c89bee7a82a19c4e27af9b57bb602ecaa00713d5bdb54d/contourpy-1.3.1-cp313-cp313-win32.whl", hash = "sha256:36987a15e8ace5f58d4d5da9dca82d498c2bbb28dff6e5d04fbfcc35a9cb3a82", size = 175315 },
{ url = "https://files.pythonhosted.org/packages/e3/d5/28bca491f65312b438fbf076589dcde7f6f966b196d900777f5811b9c4e2/contourpy-1.3.1-cp313-cp313-win_amd64.whl", hash = "sha256:a7895f46d47671fa7ceec40f31fae721da51ad34bdca0bee83e38870b1f47ffd", size = 220987 },
{ url = "https://files.pythonhosted.org/packages/2f/24/a4b285d6adaaf9746e4700932f579f1a7b6f9681109f694cfa233ae75c4e/contourpy-1.3.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:9ddeb796389dadcd884c7eb07bd14ef12408aaae358f0e2ae24114d797eede30", size = 285001 },
{ url = "https://files.pythonhosted.org/packages/48/1d/fb49a401b5ca4f06ccf467cd6c4f1fd65767e63c21322b29b04ec40b40b9/contourpy-1.3.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:19c1555a6801c2f084c7ddc1c6e11f02eb6a6016ca1318dd5452ba3f613a1751", size = 268553 },
{ url = "https://files.pythonhosted.org/packages/79/1e/4aef9470d13fd029087388fae750dccb49a50c012a6c8d1d634295caa644/contourpy-1.3.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:841ad858cff65c2c04bf93875e384ccb82b654574a6d7f30453a04f04af71342", size = 310386 },
{ url = "https://files.pythonhosted.org/packages/b0/34/910dc706ed70153b60392b5305c708c9810d425bde12499c9184a1100888/contourpy-1.3.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4318af1c925fb9a4fb190559ef3eec206845f63e80fb603d47f2d6d67683901c", size = 349806 },
{ url = "https://files.pythonhosted.org/packages/31/3c/faee6a40d66d7f2a87f7102236bf4780c57990dd7f98e5ff29881b1b1344/contourpy-1.3.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:14c102b0eab282427b662cb590f2e9340a9d91a1c297f48729431f2dcd16e14f", size = 321108 },
{ url = "https://files.pythonhosted.org/packages/17/69/390dc9b20dd4bb20585651d7316cc3054b7d4a7b4f8b710b2b698e08968d/contourpy-1.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:05e806338bfeaa006acbdeba0ad681a10be63b26e1b17317bfac3c5d98f36cda", size = 327291 },
{ url = "https://files.pythonhosted.org/packages/ef/74/7030b67c4e941fe1e5424a3d988080e83568030ce0355f7c9fc556455b01/contourpy-1.3.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4d76d5993a34ef3df5181ba3c92fabb93f1eaa5729504fb03423fcd9f3177242", size = 1263752 },
{ url = "https://files.pythonhosted.org/packages/f0/ed/92d86f183a8615f13f6b9cbfc5d4298a509d6ce433432e21da838b4b63f4/contourpy-1.3.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:89785bb2a1980c1bd87f0cb1517a71cde374776a5f150936b82580ae6ead44a1", size = 1318403 },
{ url = "https://files.pythonhosted.org/packages/b3/0e/c8e4950c77dcfc897c71d61e56690a0a9df39543d2164040301b5df8e67b/contourpy-1.3.1-cp313-cp313t-win32.whl", hash = "sha256:8eb96e79b9f3dcadbad2a3891672f81cdcab7f95b27f28f1c67d75f045b6b4f1", size = 185117 },
{ url = "https://files.pythonhosted.org/packages/c1/31/1ae946f11dfbd229222e6d6ad8e7bd1891d3d48bde5fbf7a0beb9491f8e3/contourpy-1.3.1-cp313-cp313t-win_amd64.whl", hash = "sha256:287ccc248c9e0d0566934e7d606201abd74761b5703d804ff3df8935f523d546", size = 236668 },
]
[[package]]
name = "cycler"
version = "0.12.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a9/95/a3dbbb5028f35eafb79008e7522a75244477d2838f38cbb722248dabc2a8/cycler-0.12.1.tar.gz", hash = "sha256:88bb128f02ba341da8ef447245a9e138fae777f6a23943da4540077d3601eb1c", size = 7615 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/05/c19819d5e3d95294a6f5947fb9b9629efb316b96de511b418c53d245aae6/cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30", size = 8321 },
]
[[package]]
name = "distlib"
version = "0.3.9"
@@ -42,6 +92,31 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4d/36/2a115987e2d8c300a974597416d9de88f2444426de9571f4b59b2cca3acc/filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de", size = 16215 },
]
[[package]]
name = "fonttools"
version = "4.57.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/03/2d/a9a0b6e3a0cf6bd502e64fc16d894269011930cabfc89aee20d1635b1441/fonttools-4.57.0.tar.gz", hash = "sha256:727ece10e065be2f9dd239d15dd5d60a66e17eac11aea47d447f9f03fdbc42de", size = 3492448 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/98/d4bc42d43392982eecaaca117d79845734d675219680cd43070bb001bc1f/fonttools-4.57.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:889e45e976c74abc7256d3064aa7c1295aa283c6bb19810b9f8b604dfe5c7f31", size = 2751824 },
{ url = "https://files.pythonhosted.org/packages/1a/62/7168030eeca3742fecf45f31e63b5ef48969fa230a672216b805f1d61548/fonttools-4.57.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:0425c2e052a5f1516c94e5855dbda706ae5a768631e9fcc34e57d074d1b65b92", size = 2283072 },
{ url = "https://files.pythonhosted.org/packages/5d/82/121a26d9646f0986ddb35fbbaf58ef791c25b59ecb63ffea2aab0099044f/fonttools-4.57.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:44c26a311be2ac130f40a96769264809d3b0cb297518669db437d1cc82974888", size = 4788020 },
{ url = "https://files.pythonhosted.org/packages/5b/26/e0f2fb662e022d565bbe280a3cfe6dafdaabf58889ff86fdef2d31ff1dde/fonttools-4.57.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84c41ba992df5b8d680b89fd84c6a1f2aca2b9f1ae8a67400c8930cd4ea115f6", size = 4859096 },
{ url = "https://files.pythonhosted.org/packages/9e/44/9075e323347b1891cdece4b3f10a3b84a8f4c42a7684077429d9ce842056/fonttools-4.57.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ea1e9e43ca56b0c12440a7c689b1350066595bebcaa83baad05b8b2675129d98", size = 4964356 },
{ url = "https://files.pythonhosted.org/packages/48/28/caa8df32743462fb966be6de6a79d7f30393859636d7732e82efa09fbbb4/fonttools-4.57.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:84fd56c78d431606332a0627c16e2a63d243d0d8b05521257d77c6529abe14d8", size = 5226546 },
{ url = "https://files.pythonhosted.org/packages/f6/46/95ab0f0d2e33c5b1a4fc1c0efe5e286ba9359602c0a9907adb1faca44175/fonttools-4.57.0-cp312-cp312-win32.whl", hash = "sha256:f4376819c1c778d59e0a31db5dc6ede854e9edf28bbfa5b756604727f7f800ac", size = 2146776 },
{ url = "https://files.pythonhosted.org/packages/06/5d/1be5424bb305880e1113631f49a55ea7c7da3a5fe02608ca7c16a03a21da/fonttools-4.57.0-cp312-cp312-win_amd64.whl", hash = "sha256:57e30241524879ea10cdf79c737037221f77cc126a8cdc8ff2c94d4a522504b9", size = 2193956 },
{ url = "https://files.pythonhosted.org/packages/e9/2f/11439f3af51e4bb75ac9598c29f8601aa501902dcedf034bdc41f47dd799/fonttools-4.57.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:408ce299696012d503b714778d89aa476f032414ae57e57b42e4b92363e0b8ef", size = 2739175 },
{ url = "https://files.pythonhosted.org/packages/25/52/677b55a4c0972dc3820c8dba20a29c358197a78229daa2ea219fdb19e5d5/fonttools-4.57.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:bbceffc80aa02d9e8b99f2a7491ed8c4a783b2fc4020119dc405ca14fb5c758c", size = 2276583 },
{ url = "https://files.pythonhosted.org/packages/64/79/184555f8fa77b827b9460a4acdbbc0b5952bb6915332b84c615c3a236826/fonttools-4.57.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f022601f3ee9e1f6658ed6d184ce27fa5216cee5b82d279e0f0bde5deebece72", size = 4766437 },
{ url = "https://files.pythonhosted.org/packages/f8/ad/c25116352f456c0d1287545a7aa24e98987b6d99c5b0456c4bd14321f20f/fonttools-4.57.0-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4dea5893b58d4637ffa925536462ba626f8a1b9ffbe2f5c272cdf2c6ebadb817", size = 4838431 },
{ url = "https://files.pythonhosted.org/packages/53/ae/398b2a833897297797a44f519c9af911c2136eb7aa27d3f1352c6d1129fa/fonttools-4.57.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:dff02c5c8423a657c550b48231d0a48d7e2b2e131088e55983cfe74ccc2c7cc9", size = 4951011 },
{ url = "https://files.pythonhosted.org/packages/b7/5d/7cb31c4bc9ffb9a2bbe8b08f8f53bad94aeb158efad75da645b40b62cb73/fonttools-4.57.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:767604f244dc17c68d3e2dbf98e038d11a18abc078f2d0f84b6c24571d9c0b13", size = 5205679 },
{ url = "https://files.pythonhosted.org/packages/4c/e4/6934513ec2c4d3d69ca1bc3bd34d5c69dafcbf68c15388dd3bb062daf345/fonttools-4.57.0-cp313-cp313-win32.whl", hash = "sha256:8e2e12d0d862f43d51e5afb8b9751c77e6bec7d2dc00aad80641364e9df5b199", size = 2144833 },
{ url = "https://files.pythonhosted.org/packages/c4/0d/2177b7fdd23d017bcfb702fd41e47d4573766b9114da2fddbac20dcc4957/fonttools-4.57.0-cp313-cp313-win_amd64.whl", hash = "sha256:f1d6bc9c23356908db712d282acb3eebd4ae5ec6d8b696aa40342b1d84f8e9e3", size = 2190799 },
{ url = "https://files.pythonhosted.org/packages/90/27/45f8957c3132917f91aaa56b700bcfc2396be1253f685bd5c68529b6f610/fonttools-4.57.0-py3-none-any.whl", hash = "sha256:3122c604a675513c68bd24c6a8f9091f1c2376d18e8f5fe5a101746c81b3e98f", size = 1093605 },
]
[[package]]
name = "fsspec"
version = "2025.3.2"
@@ -81,6 +156,57 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899 },
]
[[package]]
name = "kiwisolver"
version = "1.4.8"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/82/59/7c91426a8ac292e1cdd53a63b6d9439abd573c875c3f92c146767dd33faf/kiwisolver-1.4.8.tar.gz", hash = "sha256:23d5f023bdc8c7e54eb65f03ca5d5bb25b601eac4d7f1a042888a1f45237987e", size = 97538 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fc/aa/cea685c4ab647f349c3bc92d2daf7ae34c8e8cf405a6dcd3a497f58a2ac3/kiwisolver-1.4.8-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:d6af5e8815fd02997cb6ad9bbed0ee1e60014438ee1a5c2444c96f87b8843502", size = 124152 },
{ url = "https://files.pythonhosted.org/packages/c5/0b/8db6d2e2452d60d5ebc4ce4b204feeb16176a851fd42462f66ade6808084/kiwisolver-1.4.8-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:bade438f86e21d91e0cf5dd7c0ed00cda0f77c8c1616bd83f9fc157fa6760d31", size = 66555 },
{ url = "https://files.pythonhosted.org/packages/60/26/d6a0db6785dd35d3ba5bf2b2df0aedc5af089962c6eb2cbf67a15b81369e/kiwisolver-1.4.8-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b83dc6769ddbc57613280118fb4ce3cd08899cc3369f7d0e0fab518a7cf37fdb", size = 65067 },
{ url = "https://files.pythonhosted.org/packages/c9/ed/1d97f7e3561e09757a196231edccc1bcf59d55ddccefa2afc9c615abd8e0/kiwisolver-1.4.8-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:111793b232842991be367ed828076b03d96202c19221b5ebab421ce8bcad016f", size = 1378443 },
{ url = "https://files.pythonhosted.org/packages/29/61/39d30b99954e6b46f760e6289c12fede2ab96a254c443639052d1b573fbc/kiwisolver-1.4.8-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:257af1622860e51b1a9d0ce387bf5c2c4f36a90594cb9514f55b074bcc787cfc", size = 1472728 },
{ url = "https://files.pythonhosted.org/packages/0c/3e/804163b932f7603ef256e4a715e5843a9600802bb23a68b4e08c8c0ff61d/kiwisolver-1.4.8-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:69b5637c3f316cab1ec1c9a12b8c5f4750a4c4b71af9157645bf32830e39c03a", size = 1478388 },
{ url = "https://files.pythonhosted.org/packages/8a/9e/60eaa75169a154700be74f875a4d9961b11ba048bef315fbe89cb6999056/kiwisolver-1.4.8-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:782bb86f245ec18009890e7cb8d13a5ef54dcf2ebe18ed65f795e635a96a1c6a", size = 1413849 },
{ url = "https://files.pythonhosted.org/packages/bc/b3/9458adb9472e61a998c8c4d95cfdfec91c73c53a375b30b1428310f923e4/kiwisolver-1.4.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc978a80a0db3a66d25767b03688f1147a69e6237175c0f4ffffaaedf744055a", size = 1475533 },
{ url = "https://files.pythonhosted.org/packages/e4/7a/0a42d9571e35798de80aef4bb43a9b672aa7f8e58643d7bd1950398ffb0a/kiwisolver-1.4.8-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:36dbbfd34838500a31f52c9786990d00150860e46cd5041386f217101350f0d3", size = 2268898 },
{ url = "https://files.pythonhosted.org/packages/d9/07/1255dc8d80271400126ed8db35a1795b1a2c098ac3a72645075d06fe5c5d/kiwisolver-1.4.8-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:eaa973f1e05131de5ff3569bbba7f5fd07ea0595d3870ed4a526d486fe57fa1b", size = 2425605 },
{ url = "https://files.pythonhosted.org/packages/84/df/5a3b4cf13780ef6f6942df67b138b03b7e79e9f1f08f57c49957d5867f6e/kiwisolver-1.4.8-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:a66f60f8d0c87ab7f59b6fb80e642ebb29fec354a4dfad687ca4092ae69d04f4", size = 2375801 },
{ url = "https://files.pythonhosted.org/packages/8f/10/2348d068e8b0f635c8c86892788dac7a6b5c0cb12356620ab575775aad89/kiwisolver-1.4.8-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:858416b7fb777a53f0c59ca08190ce24e9abbd3cffa18886a5781b8e3e26f65d", size = 2520077 },
{ url = "https://files.pythonhosted.org/packages/32/d8/014b89fee5d4dce157d814303b0fce4d31385a2af4c41fed194b173b81ac/kiwisolver-1.4.8-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:085940635c62697391baafaaeabdf3dd7a6c3643577dde337f4d66eba021b2b8", size = 2338410 },
{ url = "https://files.pythonhosted.org/packages/bd/72/dfff0cc97f2a0776e1c9eb5bef1ddfd45f46246c6533b0191887a427bca5/kiwisolver-1.4.8-cp312-cp312-win_amd64.whl", hash = "sha256:01c3d31902c7db5fb6182832713d3b4122ad9317c2c5877d0539227d96bb2e50", size = 71853 },
{ url = "https://files.pythonhosted.org/packages/dc/85/220d13d914485c0948a00f0b9eb419efaf6da81b7d72e88ce2391f7aed8d/kiwisolver-1.4.8-cp312-cp312-win_arm64.whl", hash = "sha256:a3c44cb68861de93f0c4a8175fbaa691f0aa22550c331fefef02b618a9dcb476", size = 65424 },
{ url = "https://files.pythonhosted.org/packages/79/b3/e62464a652f4f8cd9006e13d07abad844a47df1e6537f73ddfbf1bc997ec/kiwisolver-1.4.8-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:1c8ceb754339793c24aee1c9fb2485b5b1f5bb1c2c214ff13368431e51fc9a09", size = 124156 },
{ url = "https://files.pythonhosted.org/packages/8d/2d/f13d06998b546a2ad4f48607a146e045bbe48030774de29f90bdc573df15/kiwisolver-1.4.8-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:54a62808ac74b5e55a04a408cda6156f986cefbcf0ada13572696b507cc92fa1", size = 66555 },
{ url = "https://files.pythonhosted.org/packages/59/e3/b8bd14b0a54998a9fd1e8da591c60998dc003618cb19a3f94cb233ec1511/kiwisolver-1.4.8-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:68269e60ee4929893aad82666821aaacbd455284124817af45c11e50a4b42e3c", size = 65071 },
{ url = "https://files.pythonhosted.org/packages/f0/1c/6c86f6d85ffe4d0ce04228d976f00674f1df5dc893bf2dd4f1928748f187/kiwisolver-1.4.8-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:34d142fba9c464bc3bbfeff15c96eab0e7310343d6aefb62a79d51421fcc5f1b", size = 1378053 },
{ url = "https://files.pythonhosted.org/packages/4e/b9/1c6e9f6dcb103ac5cf87cb695845f5fa71379021500153566d8a8a9fc291/kiwisolver-1.4.8-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ddc373e0eef45b59197de815b1b28ef89ae3955e7722cc9710fb91cd77b7f47", size = 1472278 },
{ url = "https://files.pythonhosted.org/packages/ee/81/aca1eb176de671f8bda479b11acdc42c132b61a2ac861c883907dde6debb/kiwisolver-1.4.8-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:77e6f57a20b9bd4e1e2cedda4d0b986ebd0216236f0106e55c28aea3d3d69b16", size = 1478139 },
{ url = "https://files.pythonhosted.org/packages/49/f4/e081522473671c97b2687d380e9e4c26f748a86363ce5af48b4a28e48d06/kiwisolver-1.4.8-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08e77738ed7538f036cd1170cbed942ef749137b1311fa2bbe2a7fda2f6bf3cc", size = 1413517 },
{ url = "https://files.pythonhosted.org/packages/8f/e9/6a7d025d8da8c4931522922cd706105aa32b3291d1add8c5427cdcd66e63/kiwisolver-1.4.8-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a5ce1e481a74b44dd5e92ff03ea0cb371ae7a0268318e202be06c8f04f4f1246", size = 1474952 },
{ url = "https://files.pythonhosted.org/packages/82/13/13fa685ae167bee5d94b415991c4fc7bb0a1b6ebea6e753a87044b209678/kiwisolver-1.4.8-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:fc2ace710ba7c1dfd1a3b42530b62b9ceed115f19a1656adefce7b1782a37794", size = 2269132 },
{ url = "https://files.pythonhosted.org/packages/ef/92/bb7c9395489b99a6cb41d502d3686bac692586db2045adc19e45ee64ed23/kiwisolver-1.4.8-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:3452046c37c7692bd52b0e752b87954ef86ee2224e624ef7ce6cb21e8c41cc1b", size = 2425997 },
{ url = "https://files.pythonhosted.org/packages/ed/12/87f0e9271e2b63d35d0d8524954145837dd1a6c15b62a2d8c1ebe0f182b4/kiwisolver-1.4.8-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:7e9a60b50fe8b2ec6f448fe8d81b07e40141bfced7f896309df271a0b92f80f3", size = 2376060 },
{ url = "https://files.pythonhosted.org/packages/02/6e/c8af39288edbce8bf0fa35dee427b082758a4b71e9c91ef18fa667782138/kiwisolver-1.4.8-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:918139571133f366e8362fa4a297aeba86c7816b7ecf0bc79168080e2bd79957", size = 2520471 },
{ url = "https://files.pythonhosted.org/packages/13/78/df381bc7b26e535c91469f77f16adcd073beb3e2dd25042efd064af82323/kiwisolver-1.4.8-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e063ef9f89885a1d68dd8b2e18f5ead48653176d10a0e324e3b0030e3a69adeb", size = 2338793 },
{ url = "https://files.pythonhosted.org/packages/d0/dc/c1abe38c37c071d0fc71c9a474fd0b9ede05d42f5a458d584619cfd2371a/kiwisolver-1.4.8-cp313-cp313-win_amd64.whl", hash = "sha256:a17b7c4f5b2c51bb68ed379defd608a03954a1845dfed7cc0117f1cc8a9b7fd2", size = 71855 },
{ url = "https://files.pythonhosted.org/packages/a0/b6/21529d595b126ac298fdd90b705d87d4c5693de60023e0efcb4f387ed99e/kiwisolver-1.4.8-cp313-cp313-win_arm64.whl", hash = "sha256:3cd3bc628b25f74aedc6d374d5babf0166a92ff1317f46267f12d2ed54bc1d30", size = 65430 },
{ url = "https://files.pythonhosted.org/packages/34/bd/b89380b7298e3af9b39f49334e3e2a4af0e04819789f04b43d560516c0c8/kiwisolver-1.4.8-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:370fd2df41660ed4e26b8c9d6bbcad668fbe2560462cba151a721d49e5b6628c", size = 126294 },
{ url = "https://files.pythonhosted.org/packages/83/41/5857dc72e5e4148eaac5aa76e0703e594e4465f8ab7ec0fc60e3a9bb8fea/kiwisolver-1.4.8-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:84a2f830d42707de1d191b9490ac186bf7997a9495d4e9072210a1296345f7dc", size = 67736 },
{ url = "https://files.pythonhosted.org/packages/e1/d1/be059b8db56ac270489fb0b3297fd1e53d195ba76e9bbb30e5401fa6b759/kiwisolver-1.4.8-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:7a3ad337add5148cf51ce0b55642dc551c0b9d6248458a757f98796ca7348712", size = 66194 },
{ url = "https://files.pythonhosted.org/packages/e1/83/4b73975f149819eb7dcf9299ed467eba068ecb16439a98990dcb12e63fdd/kiwisolver-1.4.8-cp313-cp313t-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7506488470f41169b86d8c9aeff587293f530a23a23a49d6bc64dab66bedc71e", size = 1465942 },
{ url = "https://files.pythonhosted.org/packages/c7/2c/30a5cdde5102958e602c07466bce058b9d7cb48734aa7a4327261ac8e002/kiwisolver-1.4.8-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f0121b07b356a22fb0414cec4666bbe36fd6d0d759db3d37228f496ed67c880", size = 1595341 },
{ url = "https://files.pythonhosted.org/packages/ff/9b/1e71db1c000385aa069704f5990574b8244cce854ecd83119c19e83c9586/kiwisolver-1.4.8-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d6d6bd87df62c27d4185de7c511c6248040afae67028a8a22012b010bc7ad062", size = 1598455 },
{ url = "https://files.pythonhosted.org/packages/85/92/c8fec52ddf06231b31cbb779af77e99b8253cd96bd135250b9498144c78b/kiwisolver-1.4.8-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:291331973c64bb9cce50bbe871fb2e675c4331dab4f31abe89f175ad7679a4d7", size = 1522138 },
{ url = "https://files.pythonhosted.org/packages/0b/51/9eb7e2cd07a15d8bdd976f6190c0164f92ce1904e5c0c79198c4972926b7/kiwisolver-1.4.8-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:893f5525bb92d3d735878ec00f781b2de998333659507d29ea4466208df37bed", size = 1582857 },
{ url = "https://files.pythonhosted.org/packages/0f/95/c5a00387a5405e68ba32cc64af65ce881a39b98d73cc394b24143bebc5b8/kiwisolver-1.4.8-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:b47a465040146981dc9db8647981b8cb96366fbc8d452b031e4f8fdffec3f26d", size = 2293129 },
{ url = "https://files.pythonhosted.org/packages/44/83/eeb7af7d706b8347548313fa3a3a15931f404533cc54fe01f39e830dd231/kiwisolver-1.4.8-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:99cea8b9dd34ff80c521aef46a1dddb0dcc0283cf18bde6d756f1e6f31772165", size = 2421538 },
{ url = "https://files.pythonhosted.org/packages/05/f9/27e94c1b3eb29e6933b6986ffc5fa1177d2cd1f0c8efc5f02c91c9ac61de/kiwisolver-1.4.8-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:151dffc4865e5fe6dafce5480fab84f950d14566c480c08a53c663a0020504b6", size = 2390661 },
{ url = "https://files.pythonhosted.org/packages/d9/d4/3c9735faa36ac591a4afcc2980d2691000506050b7a7e80bcfe44048daa7/kiwisolver-1.4.8-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:577facaa411c10421314598b50413aa1ebcf5126f704f1e5d72d7e4e9f020d90", size = 2546710 },
{ url = "https://files.pythonhosted.org/packages/4c/fa/be89a49c640930180657482a74970cdcf6f7072c8d2471e1babe17a222dc/kiwisolver-1.4.8-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:be4816dc51c8a471749d664161b434912eee82f2ea66bd7628bd14583a833e85", size = 2349213 },
]
[[package]]
name = "markupsafe"
version = "3.0.2"
@@ -119,6 +245,43 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739 },
]
[[package]]
name = "matplotlib"
version = "3.10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "contourpy" },
{ name = "cycler" },
{ name = "fonttools" },
{ name = "kiwisolver" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "pillow" },
{ name = "pyparsing" },
{ name = "python-dateutil" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2f/08/b89867ecea2e305f408fbb417139a8dd941ecf7b23a2e02157c36da546f0/matplotlib-3.10.1.tar.gz", hash = "sha256:e8d2d0e3881b129268585bf4765ad3ee73a4591d77b9a18c214ac7e3a79fb2ba", size = 36743335 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7c/1d/5e0dc3b59c034e43de16f94deb68f4ad8a96b3ea00f4b37c160b7474928e/matplotlib-3.10.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:66e907a06e68cb6cfd652c193311d61a12b54f56809cafbed9736ce5ad92f107", size = 8175488 },
{ url = "https://files.pythonhosted.org/packages/7a/81/dae7e14042e74da658c3336ab9799128e09a1ee03964f2d89630b5d12106/matplotlib-3.10.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:e9b4bb156abb8fa5e5b2b460196f7db7264fc6d62678c03457979e7d5254b7be", size = 8046264 },
{ url = "https://files.pythonhosted.org/packages/21/c4/22516775dcde10fc9c9571d155f90710761b028fc44f660508106c363c97/matplotlib-3.10.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1985ad3d97f51307a2cbfc801a930f120def19ba22864182dacef55277102ba6", size = 8452048 },
{ url = "https://files.pythonhosted.org/packages/63/23/c0615001f67ce7c96b3051d856baedc0c818a2ed84570b9bf9bde200f85d/matplotlib-3.10.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c96f2c2f825d1257e437a1482c5a2cf4fee15db4261bd6fc0750f81ba2b4ba3d", size = 8597111 },
{ url = "https://files.pythonhosted.org/packages/ca/c0/a07939a82aed77770514348f4568177d7dadab9787ebc618a616fe3d665e/matplotlib-3.10.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:35e87384ee9e488d8dd5a2dd7baf471178d38b90618d8ea147aced4ab59c9bea", size = 9402771 },
{ url = "https://files.pythonhosted.org/packages/a6/b6/a9405484fb40746fdc6ae4502b16a9d6e53282ba5baaf9ebe2da579f68c4/matplotlib-3.10.1-cp312-cp312-win_amd64.whl", hash = "sha256:cfd414bce89cc78a7e1d25202e979b3f1af799e416010a20ab2b5ebb3a02425c", size = 8063742 },
{ url = "https://files.pythonhosted.org/packages/60/73/6770ff5e5523d00f3bc584acb6031e29ee5c8adc2336b16cd1d003675fe0/matplotlib-3.10.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:c42eee41e1b60fd83ee3292ed83a97a5f2a8239b10c26715d8a6172226988d7b", size = 8176112 },
{ url = "https://files.pythonhosted.org/packages/08/97/b0ca5da0ed54a3f6599c3ab568bdda65269bc27c21a2c97868c1625e4554/matplotlib-3.10.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4f0647b17b667ae745c13721602b540f7aadb2a32c5b96e924cd4fea5dcb90f1", size = 8046931 },
{ url = "https://files.pythonhosted.org/packages/df/9a/1acbdc3b165d4ce2dcd2b1a6d4ffb46a7220ceee960c922c3d50d8514067/matplotlib-3.10.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa3854b5f9473564ef40a41bc922be978fab217776e9ae1545c9b3a5cf2092a3", size = 8453422 },
{ url = "https://files.pythonhosted.org/packages/51/d0/2bc4368abf766203e548dc7ab57cf7e9c621f1a3c72b516cc7715347b179/matplotlib-3.10.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e496c01441be4c7d5f96d4e40f7fca06e20dcb40e44c8daa2e740e1757ad9e6", size = 8596819 },
{ url = "https://files.pythonhosted.org/packages/ab/1b/8b350f8a1746c37ab69dda7d7528d1fc696efb06db6ade9727b7887be16d/matplotlib-3.10.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5d45d3f5245be5b469843450617dcad9af75ca50568acf59997bed9311131a0b", size = 9402782 },
{ url = "https://files.pythonhosted.org/packages/89/06/f570373d24d93503988ba8d04f213a372fa1ce48381c5eb15da985728498/matplotlib-3.10.1-cp313-cp313-win_amd64.whl", hash = "sha256:8e8e25b1209161d20dfe93037c8a7f7ca796ec9aa326e6e4588d8c4a5dd1e473", size = 8063812 },
{ url = "https://files.pythonhosted.org/packages/fc/e0/8c811a925b5a7ad75135f0e5af46408b78af88bbb02a1df775100ef9bfef/matplotlib-3.10.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:19b06241ad89c3ae9469e07d77efa87041eac65d78df4fcf9cac318028009b01", size = 8214021 },
{ url = "https://files.pythonhosted.org/packages/4a/34/319ec2139f68ba26da9d00fce2ff9f27679fb799a6c8e7358539801fd629/matplotlib-3.10.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:01e63101ebb3014e6e9f80d9cf9ee361a8599ddca2c3e166c563628b39305dbb", size = 8090782 },
{ url = "https://files.pythonhosted.org/packages/77/ea/9812124ab9a99df5b2eec1110e9b2edc0b8f77039abf4c56e0a376e84a29/matplotlib-3.10.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f06bad951eea6422ac4e8bdebcf3a70c59ea0a03338c5d2b109f57b64eb3972", size = 8478901 },
{ url = "https://files.pythonhosted.org/packages/c9/db/b05bf463689134789b06dea85828f8ebe506fa1e37593f723b65b86c9582/matplotlib-3.10.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a3dfb036f34873b46978f55e240cff7a239f6c4409eac62d8145bad3fc6ba5a3", size = 8613864 },
{ url = "https://files.pythonhosted.org/packages/c2/04/41ccec4409f3023a7576df3b5c025f1a8c8b81fbfe922ecfd837ac36e081/matplotlib-3.10.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:dc6ab14a7ab3b4d813b88ba957fc05c79493a037f54e246162033591e770de6f", size = 9409487 },
{ url = "https://files.pythonhosted.org/packages/ac/c2/0d5aae823bdcc42cc99327ecdd4d28585e15ccd5218c453b7bcd827f3421/matplotlib-3.10.1-cp313-cp313t-win_amd64.whl", hash = "sha256:bc411ebd5889a78dabbc457b3fa153203e22248bfa6eedc6797be5df0164dbf9", size = 8134832 },
]
[[package]]
name = "mpmath"
version = "1.3.0"
@@ -383,6 +546,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/88/74/a88bf1b1efeae488a0c0b7bdf71429c313722d1fc0f377537fbe554e6180/pre_commit-4.2.0-py2.py3-none-any.whl", hash = "sha256:a009ca7205f1eb497d10b845e52c838a98b6cdd2102a6c8e4540e94ee75c58bd", size = 220707 },
]
[[package]]
name = "pyparsing"
version = "3.2.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/bb/22/f1129e69d94ffff626bdb5c835506b3a5b4f3d070f17ea295e12c2c6f60f/pyparsing-3.2.3.tar.gz", hash = "sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be", size = 1088608 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/05/e7/df2285f3d08fee213f2d041540fa4fc9ca6c2d44cf36d3a035bf2a8d2bcc/pyparsing-3.2.3-py3-none-any.whl", hash = "sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf", size = 111120 },
]
[[package]]
name = "pytest"
version = "8.3.5"
@@ -398,6 +570,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/30/3d/64ad57c803f1fa1e963a7946b6e0fea4a70df53c1a7fed304586539c2bac/pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820", size = 343634 },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892 },
]
[[package]]
name = "pyyaml"
version = "6.0.2"
@@ -458,6 +642,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/54/21/f43f0a1fa8b06b32812e0975981f4677d28e0f3271601dc88ac5a5b83220/setuptools-78.1.0-py3-none-any.whl", hash = "sha256:3e386e96793c8702ae83d17b853fb93d3e09ef82ec62722e61da5cd22376dcd8", size = 1256108 },
]
[[package]]
name = "six"
version = "1.17.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050 },
]
[[package]]
name = "sympy"
version = "1.13.1"
@@ -570,6 +763,7 @@ name = "torchvision-vibecoding-project"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "matplotlib" },
{ name = "numpy" },
{ name = "pillow" },
{ name = "pytest" },
@@ -587,6 +781,7 @@ dev = [
[package.metadata]
requires-dist = [
{ name = "matplotlib", specifier = ">=3.10.1" },
{ name = "numpy", specifier = ">=2.2.4" },
{ name = "pillow", specifier = ">=11.1.0" },
{ name = "pytest", specifier = ">=8.3.5" },