Compare commits: 05ec32bac1 ... 2ccfe7b07f (2 commits)

Commits in range: 2ccfe7b07f, 17d3f419f6

README.md (126 lines changed)
@@ -17,6 +17,7 @@ IC layouts may appear in multiple orientations during matching (0°, 90°, 180°, 270°, and

* **Training script**: trains the model with a **geometry-aware loss function** to learn **geometric-structure descriptors** rather than texture features, ensuring robustness to binarization, sparsity, and repeated structures.
* **Evaluation script**: evaluates model performance on a validation set, computing geometric-consistency metrics **tailored to IC-layout characteristics**.
* **Matching tool**: uses the trained model for **geometric-structure matching**, reliably distinguishing repeated patterns and supporting multi-instance detection.
* **Flexible configuration and logging**: introduces OmegaConf-driven YAML configuration (`configs/*.yaml`), managed centrally through `utils.config_loader` and monitored with TensorBoard.

## 🛠️ Installation
@@ -46,20 +47,38 @@ pip install -e .

### 📁 Project structure

```
RoRD-Layout-Recognation/
├── configs/
│   └── base_config.yaml     # YAML configuration entry point
├── data/
│   └── ic_dataset.py        # dataset and data interfaces
├── docs/
│   ├── data_description.md
│   ├── feature_work.md
│   ├── loss_function.md
│   └── NextStep.md
├── models/
│   └── rord.py              # RoRD model definition
├── utils/
│   ├── config_loader.py     # YAML config loading and path resolution
│   ├── data_utils.py
│   └── transforms.py
├── losses.py                # geometry-aware loss collection
├── train.py                 # training script (YAML + TensorBoard)
├── evaluate.py              # evaluation script
├── match.py                 # template-matching script
├── config.py                # YAML-reading shim for legacy workflows
├── LICENSE.txt
├── pyproject.toml
└── README.md
```
### 🧩 Configuration and modularization updates

- **YAML configuration hub**: all paths and hyperparameters live in `configs/*.yaml` and are parsed uniformly by `utils.config_loader.load_config`; the CLI `--config` flag switches between experiment configs, and `to_absolute_path` resolves relative paths against the config file's location.
- **Legacy config compatibility**: `config.py` is now only a compatibility layer that converts the YAML configuration into the previous Python constants, allowing historical code to migrate gradually.
- **Loss/data decoupling**: `losses.py` collects the geometry-aware losses, while `data/ic_dataset.py` and `utils/data_utils.py` hold the data-preparation logic, making it easy to add new sampling strategies or loss terms.
- **Logging system**: the `logging` config section plus TensorBoard integration lets `train.py`, `evaluate.py`, and `match.py` write uniformly to `log_dir/<subtask>/<experiment_name>`.
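The path-resolution behavior described above can be sketched as follows (a minimal, hypothetical version of `to_absolute_path`; the repository's actual helper may differ):

```python
from pathlib import Path

def to_absolute_path(path_str: str, config_dir: Path) -> Path:
    """Resolve a possibly-relative config path against the config file's directory."""
    p = Path(path_str).expanduser()
    # Absolute paths pass through; relative paths are anchored at the config dir.
    return p if p.is_absolute() else (config_dir / p).resolve()
```

With this behavior, a relative path in a YAML file resolves against the directory containing that YAML file, regardless of the current working directory.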
## 🚀 Usage

### 📋 Training checklist

@@ -77,13 +96,35 @@ ic_layout_recognition/
#### 2. Edit the configuration file

The project reads training, evaluation, and logging parameters from `configs/base_config.yaml` by default. It is recommended to copy that file and name the copy after your experiment, for example:

```bash
cp configs/base_config.yaml configs/exp_ic_baseline.yaml
```

Then adjust the paths and key parameters in the YAML:

```yaml
paths:
  layout_dir: "training image directory"
  save_dir: "output directory (models and logs)"
  val_img_dir: "validation image directory"
  val_ann_dir: "validation annotation directory"
  template_dir: "template image directory"

training:
  num_epochs: 50
  batch_size: 8
  learning_rate: 5.0e-5

logging:
  use_tensorboard: true
  log_dir: "runs"
  experiment_name: "baseline"
```

> `config.py` is kept only for compatibility with older scripts; new workflows load all configuration through YAML + `utils.config_loader`.
#### 3. Environment check

Make sure all dependencies are installed correctly:

@@ -95,17 +136,20 @@

```bash
python -c "import cv2; print('OpenCV version:', cv2.__version__)"
```

#### Basic training

```bash
uv run python train.py --config configs/exp_ic_baseline.yaml
```

The command above reads its paths and training parameters from `configs/exp_ic_baseline.yaml`; if `--config` is omitted, the script falls back to `configs/base_config.yaml`.
#### Custom training parameters

```bash
uv run python train.py \
    --config configs/exp_ic_baseline.yaml \
    --data_dir /override/layouts \
    --save_dir /override/models \
    --epochs 60 \
    --batch_size 16 \
    --lr 1e-4
```

#### View all available parameters

@@ -119,19 +163,53 @@ python train.py --help

- Best model: `rord_model_best.pth`
- Final model: `rord_model_final.pth`
### 📊 TensorBoard experiment tracking

The new `logging` block in `configs/base_config.yaml` controls TensorBoard:

```yaml
logging:
  use_tensorboard: true       # whether to enable TensorBoard logging
  log_dir: "runs"             # log root directory (relative or absolute)
  experiment_name: "default"  # experiment name, used as the subdirectory name
```

To override these temporarily, pass command-line flags (all of the commands below can be run directly with `uv run`):

```bash
uv run python train.py --log_dir logs --experiment_name exp001
uv run python evaluate.py --log_dir logs --experiment_name exp001
uv run python match.py --tb_log_matches --log_dir logs --experiment_name exp001
uv run python train.py --disable_tensorboard  # to turn logging off
```

After running training, evaluation, or template matching, start TensorBoard with:

```bash
uv run tensorboard --logdir runs
```

TensorBoard will show:

- `train.py`: loss, learning-rate, and gradient-norm curves over time;
- `evaluate.py`: precision / recall / F1 score;
- `match.py` (with `--tb_log_matches`): per-instance inlier counts, scales, and the total number of detections.
### 🚀 Quick-start example

```bash
# 1. Install dependencies
uv sync

# 2. Copy and edit the YAML configuration
cp configs/base_config.yaml configs/exp_ic_baseline.yaml
# adjust the paths/training/logging fields for your data and experiment

# 3. Start training
uv run python train.py --config configs/exp_ic_baseline.yaml

# 4. Run matching with the trained model
uv run python match.py --config configs/exp_ic_baseline.yaml \
    --model_path ./output/rord_model_final.pth \
    --layout ./test/layout.png \
    --template ./test/template.png \
    --output ./result.png
```
@@ -16,6 +16,11 @@ matching:

evaluation:
  iou_threshold: 0.5

logging:
  use_tensorboard: true
  log_dir: "runs"
  experiment_name: "baseline"

paths:
  layout_dir: "path/to/layouts"
  save_dir: "path/to/save"

docs/NextStep.md (new file, 124 lines)
@@ -0,0 +1,124 @@
# Local TensorBoard Experiment-Tracking Plan

Date: 2025-09-25

## Goals
- Build a lightweight, low-cost experiment-tracking and visualization pipeline on a local workstation, covering the training, evaluation, and template-matching workflows.
- Integrate with the existing YAML configuration system, leaving hooks for later extensions (automated hyperparameter tuning, remote sync).

## Environment and prerequisites
1. **System and software**
   - Operating system: Ubuntu 22.04 / Windows 11 / macOS 14 (any one of these).
   - Python environment: the project's default `uv` virtual environment (see `uv.lock` / `pyproject.toml`).
2. **Install dependencies**
   ```bash
   uv add tensorboard tensorboardX
   ```
3. **Directory layout**
   - Create `runs/` in the project root; organizing logs as `runs/<experiment_name>/` is recommended.
   - Training and evaluation can write to the `runs/train/` and `runs/eval/` subdirectories respectively.
## Integration steps

### 1. Configuration extension
- Add to `configs/base_config.yaml`:
  ```yaml
  logging:
    use_tensorboard: true
    log_dir: "runs"
    experiment_name: "baseline"
  ```
- Add `--log-dir` and `--experiment-name` command-line arguments that default to the config values and can be overridden at run time.

### 2. Training script `train.py`
1. **Initialize the SummaryWriter**
   ```python
   from torch.utils.tensorboard import SummaryWriter

   log_dir = Path(args.log_dir or cfg.logging.log_dir)
   experiment = args.experiment_name or cfg.logging.experiment_name
   writer = SummaryWriter(log_dir=log_dir / "train" / experiment)
   ```
2. **Log training metrics** (every iteration)
   ```python
   global_step = epoch * len(train_dataloader) + i
   writer.add_scalar("loss/total", loss.item(), global_step)
   writer.add_scalar("loss/det", det_loss.item(), global_step)
   writer.add_scalar("loss/desc", desc_loss.item(), global_step)
   writer.add_scalar("optimizer/lr", scheduler.optimizer.param_groups[0]['lr'], global_step)
   ```
3. **Log during validation**
   - At the end of each epoch, write average losses, F1, and other metrics.
   - Visualize keypoint heatmaps and matching examples with `writer.add_image()`.
4. **Cleanup**
   - Call `writer.close()` when training finishes.
### 3. Evaluation script `evaluate.py`
1. Initialize `SummaryWriter(log_dir / "eval" / experiment)`.
2. Collect the predicted boxes, confidence scores, and ground-truth boxes for all validation samples.
3. Use `sklearn.metrics.average_precision_score` or `pycocotools` to compute each sample's Average Precision and aggregate into mAP:
   ```python
   from sklearn.metrics import average_precision_score
   ap = average_precision_score(y_true, y_scores)
   writer.add_scalar("eval/AP", ap, global_step)
   ```
4. After a successful match (IoU ≥ threshold), take the homography matrix `H` from the return value of `match_template_multiscale`.
5. Use `cv2.decomposeHomographyMat` or a manual decomposition to extract the rotation angle, translation vector, and scale factor from `H`:
   ```python
   _, Rs, Ts, Ns = cv2.decomposeHomographyMat(H, np.eye(3))
   rot_angle = compute_angle(Rs[0])
   trans_vec = Ts[0]
   scale = np.linalg.norm(Ns[0])
   ```
6. Read the ground-truth transform parameters (rotation_gt, trans_gt, scale_gt) from the annotation files and compute the errors:
   ```python
   err_rot = abs(rot_angle - rotation_gt)
   err_trans = np.linalg.norm(trans_vec - trans_gt)
   err_scale = abs(scale - scale_gt)
   writer.add_scalar("eval/err_rot", err_rot, img_id)
   writer.add_scalar("eval/err_trans", err_trans, img_id)
   writer.add_scalar("eval/err_scale", err_scale, img_id)
   ```
7. Use `writer.add_histogram` to record error distributions, and optionally `writer.add_image` to upload error histograms:
   ```python
   writer.add_histogram("eval/err_rot_hist", err_rot_list, epoch)
   ```
8. Inspect metric curves, error distributions, and visualizations in TensorBoard's Scalars, Histograms, and Images panels.
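The `compute_angle` helper referenced in step 5 is not defined in the plan; a minimal sketch, assuming the in-plane rotation angle is wanted from a 3×3 rotation matrix, could look like:

```python
import math

def compute_angle(R) -> float:
    """In-plane rotation angle (degrees) read off a 3x3 rotation matrix's
    top-left 2x2 block: angle = atan2(R[1][0], R[0][0])."""
    return math.degrees(math.atan2(R[1][0], R[0][0]))
```

This works on nested lists or NumPy arrays; a pure 90° rotation about the z-axis yields 90.0.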
### 4. Template-matching debugging in `match.py`
- Add a boolean `--tb-log-matches` flag.
- When enabled, write keypoint distributions and homography error statistics to `runs/match/<experiment>/`.
## Visualization and usage
1. **Start TensorBoard**
   ```bash
   tensorboard --logdir runs --port 6006
   ```
   - Open `http://localhost:6006` in a browser.
   - Add `--bind_all` if the instance needs to be reachable on the local network.
2. **Recommended panel layout**
   - Scalars: loss curves, learning rate, evaluation metrics.
   - Images: keypoint heatmaps, template-matching results.
   - Histograms: descriptor distributions, gradient norms (optional).
   - Text: configuration summaries, Git commit info.
## Version control and organization
- Name experiments as `YYYYMMDD_project_variant` for easy lookup.
- Save key configuration and CLI arguments with `writer.add_text()` to make logs self-describing.
- A `tools/export_tb_summary.py` script could be developed to export curve data for documents or reports.

## Further extensions
1. **Automation scripts**: add commands to the `Makefile` / `tasks.json` to launch training + TensorBoard in one step.
2. **Remote access**: forward the port via `ssh -L` or `ngrok`; mind access control.
3. **Comparing experiments**: use TensorBoard's run comparison, or a shared parent directory, to compare multiple runs.
4. **CI integration**: generate logs in the continuous-integration pipeline and keep them as build artifacts.

## Validation and maintenance
- **Smoke test**: run 1–2 epochs and confirm logs are produced and displayed correctly.
- **Storage monitoring**: periodically clean up or compress old experiments to avoid filling the disk.
- **Backups**: archive important experiment logs or sync them to a remote repository.
- **Team onboarding**: document usage in the README and run a demo session.

## Next steps
- [ ] Update the configs and scripts to hook in the SummaryWriter.
- [ ] Prepare an example notebook/document showing TensorBoard panel screenshots.
- [ ] Evaluate later whether to adopt a more advanced platform such as W&B or MLflow.
@@ -79,13 +79,13 @@

> *Goal: improve the project's maintainability, extensibility, and ease of use.*

- [x] **Migrate configuration to YAML files**
  - **✔️ Value**: `config.py` makes it hard to manage multiple experiment configurations. YAML files keep each experiment's parameters independent and clear, which aids reproducibility.
  - **📝 Plan**:
    1. Create a `configs` directory and write a `base_config.yaml` file.
    2. Introduce the `OmegaConf` or `Hydra` library.
    3. Modify `train.py`, `match.py`, and the other scripts to load configuration from the YAML file instead of importing `config.py`.
- [x] **Decouple code modules**
  - **✔️ Value**: `train.py` is too long and carries too many responsibilities. Decoupling makes the code structure clearer and follows the single-responsibility principle.
  - **📝 Plan**:
    1. Move the `ICLayoutTrainingDataset` class from `train.py` into `data/ic_dataset.py`.

@@ -95,13 +95,13 @@

> *Goal: establish a rigorous experimental workflow and provide more comprehensive measures of model performance.*

- [x] **Integrate an experiment-tracking tool (TensorBoard / W&B)**
  - **✔️ Value**: log files alone make it hard to compare experiments intuitively. A visualization tool can monitor and compare losses and evaluation metrics across runs in real time.
  - **📝 Plan**:
    1. In `train.py`, import `torch.utils.tensorboard.SummaryWriter`.
    2. In the training loop, record each loss value with `writer.add_scalar()`.
    3. After validation, record evaluation metrics, learning rate, and related information.
- [x] **Add more comprehensive evaluation metrics**
  - **✔️ Value**: the current evaluation metrics mainly measure bounding-box overlap. Adding mAP and geometric-error evaluation measures model performance more completely.
  - **📝 Plan**:
    1. Implement mAP (mean Average Precision) computation in `evaluate.py`.
evaluate.py (70 lines changed)

@@ -7,6 +7,7 @@ from pathlib import Path

import torch
from PIL import Image
from torch.utils.tensorboard import SummaryWriter

from match import match_template_multiscale
from models.rord import RoRD

@@ -23,7 +24,16 @@ def compute_iou(box1, box2):
    return inter_area / union_area if union_area > 0 else 0

# --- (modified) evaluation function ---
def evaluate(
    model,
    val_dataset_dir,
    val_annotations_dir,
    template_dir,
    matching_cfg,
    iou_threshold,
    summary_writer: SummaryWriter | None = None,
    global_step: int = 0,
):
    model.eval()
    all_tp, all_fp, all_fn = 0, 0, 0

@@ -33,6 +43,13 @@ def evaluate(model, val_dataset_dir, val_annotations_dir, template_dir, matching
    template_paths = [os.path.join(template_dir, f) for f in os.listdir(template_dir) if f.endswith('.png')]
    layout_image_names = [f for f in os.listdir(val_dataset_dir) if f.endswith('.png')]

    if summary_writer:
        summary_writer.add_text(
            "dataset/info",
            f"layouts={len(layout_image_names)}, templates={len(template_paths)}",
            global_step,
        )

    # (modified) loop over every layout file in the validation set
    for layout_name in layout_image_names:
        print(f"\nEvaluating layout: {layout_name}")

@@ -92,6 +109,15 @@ def evaluate(model, val_dataset_dir, val_annotations_dir, template_dir, matching
    precision = all_tp / (all_tp + all_fp) if (all_tp + all_fp) > 0 else 0
    recall = all_tp / (all_tp + all_fn) if (all_tp + all_fn) > 0 else 0
    f1 = 2 * (precision * recall) / (precision + recall) if (precision + recall) > 0 else 0

    if summary_writer:
        summary_writer.add_scalar("metrics/precision", precision, global_step)
        summary_writer.add_scalar("metrics/recall", recall, global_step)
        summary_writer.add_scalar("metrics/f1", f1, global_step)
        summary_writer.add_scalar("counts/true_positive", all_tp, global_step)
        summary_writer.add_scalar("counts/false_positive", all_fp, global_step)
        summary_writer.add_scalar("counts/false_negative", all_fn, global_step)

    return {'precision': precision, 'recall': recall, 'f1': f1}

if __name__ == "__main__":

@@ -101,6 +127,9 @@ if __name__ == "__main__":
    parser.add_argument('--val_dir', type=str, default=None, help="Validation image directory; falls back to the config file if omitted")
    parser.add_argument('--annotations_dir', type=str, default=None, help="Validation annotation directory; falls back to the config file if omitted")
    parser.add_argument('--templates_dir', type=str, default=None, help="Template directory; falls back to the config file if omitted")
    parser.add_argument('--log_dir', type=str, default=None, help="TensorBoard log root; overrides the config file")
    parser.add_argument('--experiment_name', type=str, default=None, help="TensorBoard experiment name; overrides the config file")
    parser.add_argument('--disable_tensorboard', action='store_true', help="Disable TensorBoard logging")
    args = parser.parse_args()

    cfg = load_config(args.config)

@@ -108,6 +137,7 @@ if __name__ == "__main__":
    paths_cfg = cfg.paths
    matching_cfg = cfg.matching
    eval_cfg = cfg.evaluation
    logging_cfg = cfg.get("logging", None)

    model_path = args.model_path or str(to_absolute_path(paths_cfg.model_path, config_dir))
    val_dir = args.val_dir or str(to_absolute_path(paths_cfg.val_img_dir, config_dir))

@@ -115,12 +145,48 @@ if __name__ == "__main__":
    templates_dir = args.templates_dir or str(to_absolute_path(paths_cfg.template_dir, config_dir))
    iou_threshold = float(eval_cfg.iou_threshold)

    use_tensorboard = False
    log_dir = None
    experiment_name = None
    if logging_cfg is not None:
        use_tensorboard = bool(logging_cfg.get("use_tensorboard", False))
        log_dir = logging_cfg.get("log_dir", "runs")
        experiment_name = logging_cfg.get("experiment_name", "default")

    if args.disable_tensorboard:
        use_tensorboard = False
    if args.log_dir is not None:
        log_dir = args.log_dir
    if args.experiment_name is not None:
        experiment_name = args.experiment_name

    writer = None
    if use_tensorboard and log_dir:
        log_root = Path(log_dir).expanduser()
        exp_folder = experiment_name or "default"
        tb_path = log_root / "eval" / exp_folder
        tb_path.parent.mkdir(parents=True, exist_ok=True)
        writer = SummaryWriter(tb_path.as_posix())

    model = RoRD().cuda()
    model.load_state_dict(torch.load(model_path))

    results = evaluate(
        model,
        val_dir,
        annotations_dir,
        templates_dir,
        matching_cfg,
        iou_threshold,
        summary_writer=writer,
        global_step=0,
    )

    print("\n--- Evaluation results ---")
    print(f"  Precision: {results['precision']:.4f}")
    print(f"  Recall:    {results['recall']:.4f}")
    print(f"  F1 score:  {results['f1']:.4f}")

    if writer:
        writer.add_text("metadata/model_path", model_path)
        writer.close()
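For reference, a self-contained IoU helper consistent with the `compute_iou` fragment shown in the hunk above (a sketch assuming `(x1, y1, x2, y2)` corner boxes; the repository's implementation may differ):

```python
def compute_iou(box1, box2):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter_area = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union_area = area1 + area2 - inter_area
    return inter_area / union_area if union_area > 0 else 0
```

Two unit boxes overlapping in a 1×1 region out of a 7-unit union give IoU = 1/7; disjoint boxes give 0.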
match.py (69 lines changed)

@@ -9,6 +9,10 @@ import numpy as np

import torch
import torch.nn.functional as F
from PIL import Image
try:
    from torch.utils.tensorboard import SummaryWriter
except ImportError:  # pragma: no cover - fallback for environments without torch tensorboard
    from tensorboardX import SummaryWriter  # type: ignore

from models.rord import RoRD
from utils.config_loader import load_config, to_absolute_path

@@ -97,16 +101,28 @@ def mutual_nearest_neighbor(descs1, descs2):
    return matches

# --- (modified) multi-scale, multi-instance matching entry point ---
def match_template_multiscale(
    model,
    layout_image,
    template_image,
    transform,
    matching_cfg,
    log_writer: SummaryWriter | None = None,
    log_step: int = 0,
):
    """
    Search for the template at multiple scales and detect multiple instances.
    """
    # 1. Extract all features from the large layout with a sliding window
    layout_kps, layout_descs = extract_features_sliding_window(model, layout_image, transform, matching_cfg)
    if log_writer:
        log_writer.add_scalar("match/layout_keypoints", len(layout_kps), log_step)

    min_inliers = int(matching_cfg.min_inliers)
    if len(layout_kps) < min_inliers:
        print("Too few keypoints extracted from the layout; matching is not possible.")
        if log_writer:
            log_writer.add_scalar("match/instances_found", 0, log_step)
        return []

    found_instances = []

@@ -162,6 +178,10 @@ def match_template_multiscale(model, layout_image, template_image, transform, ma
    # 4. If a best match was found across all scales, record and mask it
    if best_match_info['inliers'] > min_inliers:
        print(f"Found a match! Inliers: {best_match_info['inliers']}, template scale used: {best_match_info['scale']:.2f}x")
        if log_writer:
            instance_index = len(found_instances)
            log_writer.add_scalar("match/instance_inliers", int(best_match_info['inliers']), log_step + instance_index)
            log_writer.add_scalar("match/instance_scale", float(best_match_info['scale']), log_step + instance_index)

        inlier_mask = best_match_info['mask'].ravel().astype(bool)
        inlier_layout_kps = best_match_info['dst_pts'][inlier_mask]

@@ -183,6 +203,9 @@ def match_template_multiscale(model, layout_image, template_image, transform, ma
        print("No new match found at any scale; stopping the search.")
        break

    if log_writer:
        log_writer.add_scalar("match/instances_found", len(found_instances), log_step)

    return found_instances

@@ -200,6 +223,10 @@ if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Multi-scale template matching with RoRD")
    parser.add_argument('--config', type=str, default="configs/base_config.yaml", help="Path to the YAML config file")
    parser.add_argument('--model_path', type=str, default=None, help="Model weight path; falls back to the config file if omitted")
    parser.add_argument('--log_dir', type=str, default=None, help="TensorBoard log root; overrides the config file")
    parser.add_argument('--experiment_name', type=str, default=None, help="TensorBoard experiment name; overrides the config file")
    parser.add_argument('--tb_log_matches', action='store_true', help="Enable TensorBoard logging of the matching process")
    parser.add_argument('--disable_tensorboard', action='store_true', help="Disable TensorBoard logging")
    parser.add_argument('--layout', type=str, required=True)
    parser.add_argument('--template', type=str, required=True)
    parser.add_argument('--output', type=str)

@@ -208,8 +235,33 @@ if __name__ == "__main__":
    cfg = load_config(args.config)
    config_dir = Path(args.config).resolve().parent
    matching_cfg = cfg.matching
    logging_cfg = cfg.get("logging", None)
    model_path = args.model_path or str(to_absolute_path(cfg.paths.model_path, config_dir))

    use_tensorboard = False
    log_dir = None
    experiment_name = None
    if logging_cfg is not None:
        use_tensorboard = bool(logging_cfg.get("use_tensorboard", False))
        log_dir = logging_cfg.get("log_dir", "runs")
        experiment_name = logging_cfg.get("experiment_name", "default")

    if args.disable_tensorboard:
        use_tensorboard = False
    if args.log_dir is not None:
        log_dir = args.log_dir
    if args.experiment_name is not None:
        experiment_name = args.experiment_name

    should_log_matches = args.tb_log_matches and use_tensorboard and log_dir is not None
    writer = None
    if should_log_matches:
        log_root = Path(log_dir).expanduser()
        exp_folder = experiment_name or "default"
        tb_path = log_root / "match" / exp_folder
        tb_path.parent.mkdir(parents=True, exist_ok=True)
        writer = SummaryWriter(tb_path.as_posix())

    transform = get_transform()
    model = RoRD().cuda()
    model.load_state_dict(torch.load(model_path))

@@ -218,7 +270,15 @@ if __name__ == "__main__":
    layout_image = Image.open(args.layout).convert('L')
    template_image = Image.open(args.template).convert('L')

    detected_bboxes = match_template_multiscale(
        model,
        layout_image,
        template_image,
        transform,
        matching_cfg,
        log_writer=writer,
        log_step=0,
    )

    print("\nDetected bounding boxes:")
    for bbox in detected_bboxes:

@@ -226,3 +286,8 @@ if __name__ == "__main__":

    if args.output:
        visualize_matches(args.layout, detected_bboxes, args.output)

    if writer:
        writer.add_scalar("match/output_instances", len(detected_bboxes), 0)
        writer.add_text("match/layout_path", args.layout, 0)
        writer.close()
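The `mutual_nearest_neighbor` helper referenced in the hunk above is not shown in full; a minimal pure-Python sketch of mutual nearest-neighbor matching on L2 distance (the repository's version presumably operates on GPU tensors) might look like:

```python
def mutual_nearest_neighbor(descs1, descs2):
    """Return index pairs (i, j) where descs1[i] and descs2[j] are each
    other's nearest neighbor under squared-L2 distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Nearest neighbor in descs2 for each descriptor in descs1, and vice versa.
    nn12 = [min(range(len(descs2)), key=lambda j: dist(d, descs2[j])) for d in descs1]
    nn21 = [min(range(len(descs1)), key=lambda i: dist(d, descs1[i])) for d in descs2]
    # Keep only pairs that agree in both directions.
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
```

The mutual check discards one-sided matches, which is what makes this matcher robust against the repeated, near-identical structures typical of IC layouts.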
pyproject.toml

@@ -15,6 +15,8 @@ dependencies = [
    "torch>=2.7.1",
    "torchvision>=0.22.1",
    "omegaconf>=2.3.0",
    "tensorboard>=2.16.2",
    "tensorboardx>=2.6.2",
]

[[tool.uv.index]]
train.py (56 lines changed)

@@ -8,6 +8,7 @@ from pathlib import Path

import torch
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

from data.ic_dataset import ICLayoutTrainingDataset
from losses import compute_detection_loss, compute_description_loss

@@ -45,12 +46,39 @@ def main(args):
    patch_size = int(cfg.training.patch_size)
    scale_range = tuple(float(x) for x in cfg.training.scale_jitter_range)

    logging_cfg = cfg.get("logging", None)
    use_tensorboard = False
    log_dir = None
    experiment_name = None

    if logging_cfg is not None:
        use_tensorboard = bool(logging_cfg.get("use_tensorboard", False))
        log_dir = logging_cfg.get("log_dir", "runs")
        experiment_name = logging_cfg.get("experiment_name", "default")

    if args.disable_tensorboard:
        use_tensorboard = False
    if args.log_dir is not None:
        log_dir = args.log_dir
    if args.experiment_name is not None:
        experiment_name = args.experiment_name

    writer = None
    if use_tensorboard and log_dir:
        log_root = Path(log_dir).expanduser()
        experiment_folder = experiment_name or "default"
        tb_path = log_root / "train" / experiment_folder
        tb_path.parent.mkdir(parents=True, exist_ok=True)
        writer = SummaryWriter(tb_path.as_posix())

    logger = setup_logging(save_dir)

    logger.info("--- Starting RoRD training ---")
    logger.info(f"Training parameters: Epochs={epochs}, Batch Size={batch_size}, LR={lr}")
    logger.info(f"Data directory: {data_dir}")
    logger.info(f"Save directory: {save_dir}")
    if writer:
        logger.info(f"TensorBoard log directory: {tb_path}")

    transform = get_transform()

@@ -69,6 +97,8 @@ def main(args):
    train_dataset, val_dataset = torch.utils.data.random_split(dataset, [train_size, val_size])

    logger.info(f"Training set size: {len(train_dataset)}, validation set size: {len(val_dataset)}")
    if writer:
        writer.add_text("dataset/info", f"train={len(train_dataset)}, val={len(val_dataset)}")

    train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=4)
    val_dataloader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False, num_workers=4)

@@ -116,6 +146,14 @@ def main(args):
            total_det_loss += det_loss.item()
            total_desc_loss += desc_loss.item()

            if writer:
                num_batches = len(train_dataloader) if len(train_dataloader) > 0 else 1
                global_step = epoch * num_batches + i
                writer.add_scalar("train/loss_total", loss.item(), global_step)
                writer.add_scalar("train/loss_det", det_loss.item(), global_step)
                writer.add_scalar("train/loss_desc", desc_loss.item(), global_step)
                writer.add_scalar("train/lr", optimizer.param_groups[0]['lr'], global_step)

            if i % 10 == 0:
                logger.info(f"Epoch {epoch+1}, Batch {i}, Total Loss: {loss.item():.4f}, "
                            f"Det Loss: {det_loss.item():.4f}, Desc Loss: {desc_loss.item():.4f}")

@@ -123,6 +161,10 @@ def main(args):
        avg_train_loss = total_train_loss / len(train_dataloader)
        avg_det_loss = total_det_loss / len(train_dataloader)
        avg_desc_loss = total_desc_loss / len(train_dataloader)
        if writer:
            writer.add_scalar("epoch/train_loss_total", avg_train_loss, epoch)
            writer.add_scalar("epoch/train_loss_det", avg_det_loss, epoch)
            writer.add_scalar("epoch/train_loss_desc", avg_desc_loss, epoch)

        # Validation phase
        model.eval()

@@ -156,6 +198,11 @@ def main(args):
        logger.info(f"Train - Total: {avg_train_loss:.4f}, Det: {avg_det_loss:.4f}, Desc: {avg_desc_loss:.4f}")
        logger.info(f"Val   - Total: {avg_val_loss:.4f}, Det: {avg_val_det_loss:.4f}, Desc: {avg_val_desc_loss:.4f}")
        logger.info(f"Learning rate: {optimizer.param_groups[0]['lr']:.2e}")
        if writer:
            writer.add_scalar("epoch/val_loss_total", avg_val_loss, epoch)
            writer.add_scalar("epoch/val_loss_det", avg_val_det_loss, epoch)
            writer.add_scalar("epoch/val_loss_desc", avg_val_desc_loss, epoch)
            writer.add_scalar("epoch/lr", optimizer.param_groups[0]['lr'], epoch)

        # Early-stopping check
        if avg_val_loss < best_val_loss:

@@ -179,6 +226,8 @@ def main(args):
                }
            }, save_path)
            logger.info(f"Best model saved to: {save_path}")
            if writer:
                writer.add_scalar("checkpoint/best_val_loss", best_val_loss, epoch)
        else:
            patience_counter += 1
            if patience_counter >= patience:

@@ -202,6 +251,10 @@ def main(args):
    logger.info(f"Final model saved to: {save_path}")
    logger.info("Training finished!")

    if writer:
        writer.add_scalar("final/val_loss", avg_val_loss, epochs - 1)
        writer.close()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Train the RoRD model")
    parser.add_argument('--config', type=str, default="configs/base_config.yaml", help="Path to the YAML config file")

@@ -210,4 +263,7 @@ if __name__ == "__main__":
    parser.add_argument('--epochs', type=int, default=None, help="Number of epochs; falls back to the config file if omitted")
    parser.add_argument('--batch_size', type=int, default=None, help="Batch size; falls back to the config file if omitted")
    parser.add_argument('--lr', type=float, default=None, help="Learning rate; falls back to the config file if omitted")
    parser.add_argument('--log_dir', type=str, default=None, help="TensorBoard log root; overrides the config file")
    parser.add_argument('--experiment_name', type=str, default=None, help="TensorBoard experiment name; overrides the config file")
    parser.add_argument('--disable_tensorboard', action='store_true', help="Disable TensorBoard logging")
    main(parser.parse_args())
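The patience-based early stopping visible in the hunks above (`best_val_loss` / `patience_counter`) can be sketched as a tiny standalone tracker (hypothetical; `train.py` keeps these variables inline):

```python
class EarlyStopping:
    """Stop training after `patience` consecutive epochs without a new best
    validation loss."""

    def __init__(self, patience: int = 10):
        self.patience = patience
        self.best = float("inf")
        self.counter = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.counter = 0  # new best resets the patience counter
        else:
            self.counter += 1
        return self.counter >= self.patience
```

Bundling the state this way makes the stopping rule easy to unit-test independently of the training loop.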
132
uv.lock
generated
132
uv.lock
generated
@@ -7,6 +7,15 @@ resolution-markers = [
|
||||
"(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "absl-py"
|
||||
version = "2.3.1"
|
||||
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
|
||||
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/10/2a/c93173ffa1b39c1d0395b7e842bbdc62e556ca9d8d3b5572926f3e4ca752/absl_py-2.3.1.tar.gz", hash = "sha256:a97820526f7fbfd2ec1bce83f3f25e3a14840dac0d8e02a0b71cd75db3f77fc9", size = 116588, upload-time = "2025-07-03T09:31:44.05Z" }
|
||||
wheels = [
|
||||
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8f/aa/ba0014cc4659328dc818a28827be78e6d97312ab0cb98105a770924dc11e/absl_py-2.3.1-py3-none-any.whl", hash = "sha256:eeecf07f0c2a93ace0772c92e596ace6d3d3996c042b2128459aaae2a76de11d", size = 135811, upload-time = "2025-07-03T09:31:42.253Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "antlr4-python3-runtime"
|
||||
version = "4.9.3"
|
||||
@@ -150,6 +159,37 @@ wheels = [
|
||||
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/30/bc/70950ace7b5db1cbe5bdf2f9238cab885f51ad51d6befac2d0fb11d94fb9/gdstk-0.9.60-cp313-cp313-win_amd64.whl", hash = "sha256:9e41b38a719991b1e36ea0320c44688ac18e64ae53d9e5775fc7222fccfbb34a", size = 499802, upload-time = "2025-04-15T12:37:59.404Z" },
|
||||
]
|
||||
|
||||
[[package]]
name = "grpcio"
version = "1.75.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/91/88/fe2844eefd3d2188bc0d7a2768c6375b46dfd96469ea52d8aeee8587d7e0/grpcio-1.75.0.tar.gz", hash = "sha256:b989e8b09489478c2d19fecc744a298930f40d8b27c3638afbfe84d22f36ce4e", size = 12722485, upload-time = "2025-09-16T09:20:21.731Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/0d/93/a1b29c2452d15cecc4a39700fbf54721a3341f2ddbd1bd883f8ec0004e6e/grpcio-1.75.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:fa35ccd9501ffdd82b861809cbfc4b5b13f4b4c5dc3434d2d9170b9ed38a9054", size = 5661861, upload-time = "2025-09-16T09:18:58.748Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b8/ce/7280df197e602d14594e61d1e60e89dfa734bb59a884ba86cdd39686aadb/grpcio-1.75.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:0fcb77f2d718c1e58cc04ef6d3b51e0fa3b26cf926446e86c7eba105727b6cd4", size = 11459982, upload-time = "2025-09-16T09:19:01.211Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7c/9b/37e61349771f89b543a0a0bbc960741115ea8656a2414bfb24c4de6f3dd7/grpcio-1.75.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:36764a4ad9dc1eb891042fab51e8cdf7cc014ad82cee807c10796fb708455041", size = 6239680, upload-time = "2025-09-16T09:19:04.443Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a6/66/f645d9d5b22ca307f76e71abc83ab0e574b5dfef3ebde4ec8b865dd7e93e/grpcio-1.75.0-cp312-cp312-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:725e67c010f63ef17fc052b261004942763c0b18dcd84841e6578ddacf1f9d10", size = 6908511, upload-time = "2025-09-16T09:19:07.884Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e6/9a/34b11cd62d03c01b99068e257595804c695c3c119596c7077f4923295e19/grpcio-1.75.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:91fbfc43f605c5ee015c9056d580a70dd35df78a7bad97e05426795ceacdb59f", size = 6429105, upload-time = "2025-09-16T09:19:10.085Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/1a/46/76eaceaad1f42c1e7e6a5b49a61aac40fc5c9bee4b14a1630f056ac3a57e/grpcio-1.75.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:7a9337ac4ce61c388e02019d27fa837496c4b7837cbbcec71b05934337e51531", size = 7060578, upload-time = "2025-09-16T09:19:12.283Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3d/82/181a0e3f1397b6d43239e95becbeb448563f236c0db11ce990f073b08d01/grpcio-1.75.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:ee16e232e3d0974750ab5f4da0ab92b59d6473872690b5e40dcec9a22927f22e", size = 8003283, upload-time = "2025-09-16T09:19:15.601Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/de/09/a335bca211f37a3239be4b485e3c12bf3da68d18b1f723affdff2b9e9680/grpcio-1.75.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:55dfb9122973cc69520b23d39867726722cafb32e541435707dc10249a1bdbc6", size = 7460319, upload-time = "2025-09-16T09:19:18.409Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/aa/59/6330105cdd6bc4405e74c96838cd7e148c3653ae3996e540be6118220c79/grpcio-1.75.0-cp312-cp312-win32.whl", hash = "sha256:fb64dd62face3d687a7b56cd881e2ea39417af80f75e8b36f0f81dfd93071651", size = 3934011, upload-time = "2025-09-16T09:19:21.013Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ff/14/e1309a570b7ebdd1c8ca24c4df6b8d6690009fa8e0d997cb2c026ce850c9/grpcio-1.75.0-cp312-cp312-win_amd64.whl", hash = "sha256:6b365f37a9c9543a9e91c6b4103d68d38d5bcb9965b11d5092b3c157bd6a5ee7", size = 4637934, upload-time = "2025-09-16T09:19:23.19Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/00/64/dbce0ffb6edaca2b292d90999dd32a3bd6bc24b5b77618ca28440525634d/grpcio-1.75.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:1bb78d052948d8272c820bb928753f16a614bb2c42fbf56ad56636991b427518", size = 5666860, upload-time = "2025-09-16T09:19:25.417Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f3/e6/da02c8fa882ad3a7f868d380bb3da2c24d35dd983dd12afdc6975907a352/grpcio-1.75.0-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:9dc4a02796394dd04de0b9673cb79a78901b90bb16bf99ed8cb528c61ed9372e", size = 11455148, upload-time = "2025-09-16T09:19:28.615Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/ba/a0/84f87f6c2cf2a533cfce43b2b620eb53a51428ec0c8fe63e5dd21d167a70/grpcio-1.75.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:437eeb16091d31498585d73b133b825dc80a8db43311e332c08facf820d36894", size = 6243865, upload-time = "2025-09-16T09:19:31.342Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/be/12/53da07aa701a4839dd70d16e61ce21ecfcc9e929058acb2f56e9b2dd8165/grpcio-1.75.0-cp313-cp313-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:c2c39984e846bd5da45c5f7bcea8fafbe47c98e1ff2b6f40e57921b0c23a52d0", size = 6915102, upload-time = "2025-09-16T09:19:33.658Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/5b/c0/7eaceafd31f52ec4bf128bbcf36993b4bc71f64480f3687992ddd1a6e315/grpcio-1.75.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:38d665f44b980acdbb2f0e1abf67605ba1899f4d2443908df9ec8a6f26d2ed88", size = 6432042, upload-time = "2025-09-16T09:19:36.583Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/6b/12/a2ce89a9f4fc52a16ed92951f1b05f53c17c4028b3db6a4db7f08332bee8/grpcio-1.75.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:2e8e752ab5cc0a9c5b949808c000ca7586223be4f877b729f034b912364c3964", size = 7062984, upload-time = "2025-09-16T09:19:39.163Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/55/a6/2642a9b491e24482d5685c0f45c658c495a5499b43394846677abed2c966/grpcio-1.75.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:3a6788b30aa8e6f207c417874effe3f79c2aa154e91e78e477c4825e8b431ce0", size = 8001212, upload-time = "2025-09-16T09:19:41.726Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/19/20/530d4428750e9ed6ad4254f652b869a20a40a276c1f6817b8c12d561f5ef/grpcio-1.75.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ffc33e67cab6141c54e75d85acd5dec616c5095a957ff997b4330a6395aa9b51", size = 7457207, upload-time = "2025-09-16T09:19:44.368Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e2/6f/843670007e0790af332a21468d10059ea9fdf97557485ae633b88bd70efc/grpcio-1.75.0-cp313-cp313-win32.whl", hash = "sha256:c8cfc780b7a15e06253aae5f228e1e84c0d3c4daa90faf5bc26b751174da4bf9", size = 3934235, upload-time = "2025-09-16T09:19:46.815Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/4b/92/c846b01b38fdf9e2646a682b12e30a70dc7c87dfe68bd5e009ee1501c14b/grpcio-1.75.0-cp313-cp313-win_amd64.whl", hash = "sha256:0c91d5b16eff3cbbe76b7a1eaaf3d91e7a954501e9d4f915554f87c470475c3d", size = 4637558, upload-time = "2025-09-16T09:19:49.698Z" },
]

[[package]]
name = "jinja2"
version = "3.1.6"
@@ -184,6 +224,15 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e0/0e/89dd819f642d2a0d306905dac27a7d82ba75d2e887753c5a432ad7cbd5c4/klayout-0.30.2-cp313-cp313-win_amd64.whl", hash = "sha256:c27601cfd8f39ff55f63b795abc9c043ec46319127c86084b12b7c5b187135f6", size = 13190664, upload-time = "2025-05-29T22:51:23.621Z" },
]

[[package]]
name = "markdown"
version = "3.9"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/8d/37/02347f6d6d8279247a5837082ebc26fc0d5aaeaf75aa013fcbb433c777ab/markdown-3.9.tar.gz", hash = "sha256:d2900fe1782bd33bdbbd56859defef70c2e78fc46668f8eb9df3128138f2cb6a", size = 364585, upload-time = "2025-09-04T20:25:22.885Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/70/ae/44c4a6a4cbb496d93c6257954260fe3a6e91b7bed2240e5dad2a717f5111/markdown-3.9-py3-none-any.whl", hash = "sha256:9f4d91ed810864ea88a6f32c07ba8bee1346c0cc1f6b1f9f6c822f2a9667d280", size = 107441, upload-time = "2025-09-04T20:25:21.784Z" },
]

[[package]]
name = "markupsafe"
version = "3.0.2"
@@ -444,6 +493,15 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a4/7d/f1c30a92854540bf789e9cd5dde7ef49bbe63f855b85a2e6b3db8135c591/opencv_python-4.11.0.86-cp37-abi3-win_amd64.whl", hash = "sha256:085ad9b77c18853ea66283e98affefe2de8cc4c1f43eda4c100cf9b2721142ec", size = 39488044, upload-time = "2025-01-16T13:52:21.928Z" },
]

[[package]]
name = "packaging"
version = "25.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" },
]

[[package]]
name = "pillow"
version = "11.2.1"
@@ -485,6 +543,20 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/67/32/32dc030cfa91ca0fc52baebbba2e009bb001122a1daa8b6a79ad830b38d3/pillow-11.2.1-cp313-cp313t-win_arm64.whl", hash = "sha256:225c832a13326e34f212d2072982bb1adb210e0cc0b153e688743018c94a2681", size = 2417234, upload-time = "2025-04-12T17:49:08.399Z" },
]

[[package]]
name = "protobuf"
version = "6.32.1"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/fa/a4/cc17347aa2897568beece2e674674359f911d6fe21b0b8d6268cd42727ac/protobuf-6.32.1.tar.gz", hash = "sha256:ee2469e4a021474ab9baafea6cd070e5bf27c7d29433504ddea1a4ee5850f68d", size = 440635, upload-time = "2025-09-11T21:38:42.935Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/c0/98/645183ea03ab3995d29086b8bf4f7562ebd3d10c9a4b14ee3f20d47cfe50/protobuf-6.32.1-cp310-abi3-win32.whl", hash = "sha256:a8a32a84bc9f2aad712041b8b366190f71dde248926da517bde9e832e4412085", size = 424411, upload-time = "2025-09-11T21:38:27.427Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/8c/f3/6f58f841f6ebafe076cebeae33fc336e900619d34b1c93e4b5c97a81fdfa/protobuf-6.32.1-cp310-abi3-win_amd64.whl", hash = "sha256:b00a7d8c25fa471f16bc8153d0e53d6c9e827f0953f3c09aaa4331c718cae5e1", size = 435738, upload-time = "2025-09-11T21:38:30.959Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/10/56/a8a3f4e7190837139e68c7002ec749190a163af3e330f65d90309145a210/protobuf-6.32.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:d8c7e6eb619ffdf105ee4ab76af5a68b60a9d0f66da3ea12d1640e6d8dab7281", size = 426454, upload-time = "2025-09-11T21:38:34.076Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/3f/be/8dd0a927c559b37d7a6c8ab79034fd167dcc1f851595f2e641ad62be8643/protobuf-6.32.1-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:2f5b80a49e1eb7b86d85fcd23fe92df154b9730a725c3b38c4e43b9d77018bf4", size = 322874, upload-time = "2025-09-11T21:38:35.509Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/5c/f6/88d77011b605ef979aace37b7703e4eefad066f7e84d935e5a696515c2dd/protobuf-6.32.1-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:b1864818300c297265c83a4982fd3169f97122c299f56a56e2445c3698d34710", size = 322013, upload-time = "2025-09-11T21:38:37.017Z" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/97/b7/15cc7d93443d6c6a84626ae3258a91f4c6ac8c0edd5df35ea7658f71b79c/protobuf-6.32.1-py3-none-any.whl", hash = "sha256:2601b779fc7d32a866c6b4404f9d42a3f67c5b9f3f15b4db3cccabe06b95c346", size = 169289, upload-time = "2025-09-11T21:38:41.234Z" },
]

[[package]]
name = "pycparser"
version = "2.22"
@@ -533,6 +605,8 @@ dependencies = [
{ name = "omegaconf" },
{ name = "opencv-python" },
{ name = "pillow" },
{ name = "tensorboard" },
{ name = "tensorboardx" },
{ name = "torch" },
{ name = "torchvision" },
]
@@ -547,6 +621,8 @@ requires-dist = [
{ name = "omegaconf", specifier = ">=2.3.0" },
{ name = "opencv-python", specifier = ">=4.11.0.86" },
{ name = "pillow", specifier = ">=11.2.1" },
{ name = "tensorboard", specifier = ">=2.16.2" },
{ name = "tensorboardx", specifier = ">=2.6.2" },
{ name = "torch", specifier = ">=2.7.1" },
{ name = "torchvision", specifier = ">=0.22.1" },
]
@@ -572,6 +648,50 @@ wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/a2/09/77d55d46fd61b4a135c444fc97158ef34a095e5681d0a6c10b75bf356191/sympy-1.14.0-py3-none-any.whl", hash = "sha256:e091cc3e99d2141a0ba2847328f5479b05d94a6635cb96148ccb3f34671bd8f5", size = 6299353, upload-time = "2025-04-27T18:04:59.103Z" },
]

[[package]]
name = "tensorboard"
version = "2.20.0"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "absl-py" },
{ name = "grpcio" },
{ name = "markdown" },
{ name = "numpy" },
{ name = "packaging" },
{ name = "pillow" },
{ name = "protobuf" },
{ name = "setuptools" },
{ name = "tensorboard-data-server" },
{ name = "werkzeug" },
]
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/9c/d9/a5db55f88f258ac669a92858b70a714bbbd5acd993820b41ec4a96a4d77f/tensorboard-2.20.0-py3-none-any.whl", hash = "sha256:9dc9f978cb84c0723acf9a345d96c184f0293d18f166bb8d59ee098e6cfaaba6", size = 5525680, upload-time = "2025-07-17T19:20:49.638Z" },
]

[[package]]
name = "tensorboard-data-server"
version = "0.7.2"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/7a/13/e503968fefabd4c6b2650af21e110aa8466fe21432cd7c43a84577a89438/tensorboard_data_server-0.7.2-py3-none-any.whl", hash = "sha256:7e0610d205889588983836ec05dc098e80f97b7e7bbff7e994ebb78f578d0ddb" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/b7/85/dabeaf902892922777492e1d253bb7e1264cadce3cea932f7ff599e53fea/tensorboard_data_server-0.7.2-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:9fe5d24221b29625dbc7328b0436ca7fc1c23de4acf4d272f1180856e32f9f60" },
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/73/c6/825dab04195756cf8ff2e12698f22513b3db2f64925bdd41671bfb33aaa5/tensorboard_data_server-0.7.2-py3-none-manylinux_2_31_x86_64.whl", hash = "sha256:ef687163c24185ae9754ed5650eb5bc4d84ff257aabdc33f0cc6f74d8ba54530" },
]

[[package]]
name = "tensorboardx"
version = "2.6.4"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "numpy" },
{ name = "packaging" },
{ name = "protobuf" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/2b/c5/d4cc6e293fb837aaf9f76dd7745476aeba8ef7ef5146c3b3f9ee375fe7a5/tensorboardx-2.6.4.tar.gz", hash = "sha256:b163ccb7798b31100b9f5fa4d6bc22dad362d7065c2f24b51e50731adde86828", size = 4769801, upload-time = "2025-06-10T22:37:07.419Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/e0/1d/b5d63f1a6b824282b57f7b581810d20b7a28ca951f2d5b59f1eb0782c12b/tensorboardx-2.6.4-py3-none-any.whl", hash = "sha256:5970cf3a1f0a6a6e8b180ccf46f3fe832b8a25a70b86e5a237048a7c0beb18e2", size = 87201, upload-time = "2025-06-10T22:37:05.44Z" },
]

[[package]]
name = "tinycss2"
version = "1.4.0"
@@ -681,3 +801,15 @@ sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/0b/02/ae6ceac1baeda5
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78", size = 11774, upload-time = "2017-04-05T20:21:32.581Z" },
]

[[package]]
name = "werkzeug"
version = "3.1.3"
source = { registry = "https://pypi.tuna.tsinghua.edu.cn/simple" }
dependencies = [
{ name = "markupsafe" },
]
sdist = { url = "https://pypi.tuna.tsinghua.edu.cn/packages/9f/69/83029f1f6300c5fb2471d621ab06f6ec6b3324685a2ce0f9777fd4a8b71e/werkzeug-3.1.3.tar.gz", hash = "sha256:60723ce945c19328679790e3282cc758aa4a6040e4bb330f53d30fa546d44746", size = 806925, upload-time = "2024-11-08T15:52:18.093Z" }
wheels = [
{ url = "https://pypi.tuna.tsinghua.edu.cn/packages/52/24/ab44c871b0f07f491e5d2ad12c9bd7358e527510618cb1b803a88e986db1/werkzeug-3.1.3-py3-none-any.whl", hash = "sha256:54b78bf3716d19a65be4fceccc0d1d7b89e608834989dfae50ea87564639213e", size = 224498, upload-time = "2024-11-08T15:52:16.132Z" },
]