initial commit

This commit is contained in:
Jiao77
2025-11-24 20:34:50 +08:00
commit 633749886e
15 changed files with 2665 additions and 0 deletions

README.md
# IC Layout Skeleton Generation (prototype)
This repo provides a prototype pipeline to handle strictly-Manhattan binary IC layout images: extract skeletons, vectorize skeletons into Manhattan polylines, train a skeleton-generation model, and expand skeletons back to images.
## Project Structure
- `scripts/`: Core processing scripts.
  - `skeleton_extract.py`: Binarize & extract skeleton PNG.
  - `vectorize_skeleton.py`: Trace skeleton PNG into JSON polylines.
  - `expand_skeleton.py`: Rasterize polyline JSON back to a PNG.
- `models/`: PyTorch models (e.g., `SkeletonVAE`).
- `train/`: Training scripts.
- `datasets/`: Dataset-building helpers.
- `out/`: Output directory for generated data and logs (ignored by git).
## Setup
This project uses `uv` for dependency management.
1. **Install uv**:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. **Install dependencies**:
```bash
uv sync
```
## Usage
### 1. Prepare Data
Ensure your dataset is available. For example, link your data folder:
```bash
ln -s ~/Documents/data/ICCAD2019/img ./ICCAD2019/img
```
(Note: if it was set up previously, the workspace may already contain an `ICCAD2019` link.)
### 2. Run the Pipeline
You can run individual scripts using `uv run`.
**Extract Skeleton:**
```bash
uv run scripts/skeleton_extract.py path/to/image.png out/result_dir --denoise 3
```
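The README does not show what `skeleton_extract.py` does internally. As an illustration only, here is a minimal numpy sketch of the idea for horizontal Manhattan features: binarize, then keep the middle row of each solid vertical run per column. The function names are hypothetical; the real script presumably uses proper morphological thinning.

```python
import numpy as np

def binarize(img, thresh=128):
    """Threshold a grayscale image into a 0/1 mask."""
    return (np.asarray(img) >= thresh).astype(np.uint8)

def centerline_skeleton(mask):
    """Toy skeleton for horizontal Manhattan features:
    in every column, keep only the middle row of each solid run."""
    sk = np.zeros_like(mask)
    for x in range(mask.shape[1]):
        ys = np.flatnonzero(mask[:, x])
        if ys.size == 0:
            continue
        # split the column's foreground pixels into contiguous runs
        splits = np.flatnonzero(np.diff(ys) > 1) + 1
        for run in np.split(ys, splits):
            sk[run[run.size // 2], x] = 1
    return sk
```

For a 3-pixel-tall horizontal bar this keeps exactly the middle row, which is the behavior the pipeline relies on before vectorization.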
**Vectorize Skeleton:**
```bash
uv run scripts/vectorize_skeleton.py out/result_dir/image_sk.png out/result_dir/image_vec.json
```
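The JSON schema produced by `vectorize_skeleton.py` is not documented here; as a sketch of the tracing step, the following toy vectorizer collects maximal horizontal and vertical pixel runs as two-point Manhattan polylines, using an assumed `[[x0, y0], [x1, y1]]` convention (the real output format may differ).

```python
import numpy as np

def skeleton_to_polylines(sk):
    """Toy vectorizer: collect maximal horizontal and vertical runs of
    skeleton pixels as two-point Manhattan polylines [[x0, y0], [x1, y1]]."""
    def runs(indices):
        splits = np.flatnonzero(np.diff(indices) > 1) + 1
        return [r for r in np.split(indices, splits) if r.size >= 2]

    polylines = []
    for y in range(sk.shape[0]):  # horizontal segments
        for r in runs(np.flatnonzero(sk[y, :])):
            polylines.append([[int(r[0]), y], [int(r[-1]), y]])
    for x in range(sk.shape[1]):  # vertical segments
        for r in runs(np.flatnonzero(sk[:, x])):
            polylines.append([[x, int(r[0])], [x, int(r[-1])]])
    return polylines
```

Runs of length one are skipped here for simplicity; a real tracer would also handle junctions and isolated pixels.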
**Expand (Reconstruct) Image:**
```bash
uv run scripts/expand_skeleton.py out/result_dir/image_vec.json out/result_dir/image_recon.png
```
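The expansion step is the inverse of vectorization: draw each axis-aligned segment back into a binary raster. A minimal numpy sketch, assuming the same hypothetical `[[x0, y0], [x1, y1]]` segment convention as above:

```python
import numpy as np

def rasterize_polylines(polylines, shape):
    """Draw axis-aligned two-point polylines back into a binary image."""
    img = np.zeros(shape, np.uint8)
    for (x0, y0), (x1, y1) in polylines:
        if y0 == y1:  # horizontal segment
            img[y0, min(x0, x1):max(x0, x1) + 1] = 1
        elif x0 == x1:  # vertical segment
            img[min(y0, y1):max(y0, y1) + 1, x0] = 1
        else:
            raise ValueError("non-Manhattan segment")
    return img
```

Note this reconstructs the skeleton only; the real `expand_skeleton.py` may additionally re-dilate wires to their original widths.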
### 3. Build Dataset
Batch process a folder of images:
```bash
uv run datasets/build_skeleton_dataset.py ICCAD2019/img out/dataset_processed
```
### 4. Train Model
Train the VAE on the processed skeletons:
```bash
uv run train/train_skeleton_vae.py out/dataset_processed --epochs 20 --batch 16
```
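The README does not spell out the training objective of `SkeletonVAE`. As a reminder of what a standard VAE optimizes, here is the usual ELBO loss (binary cross-entropy reconstruction plus a KL regularizer) in plain numpy; the actual model's loss weighting and terms may differ.

```python
import numpy as np

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))

def vae_loss(x, x_recon, mu, logvar, eps=1e-7):
    """Standard VAE objective: BCE reconstruction term plus KL term."""
    x_recon = np.clip(x_recon, eps, 1 - eps)
    bce = -np.sum(x * np.log(x_recon) + (1 - x) * np.log(1 - x_recon))
    return bce + kl_divergence(mu, logvar)
```

With `mu = 0` and `logvar = 0` the posterior matches the prior and the KL term vanishes, which is a handy sanity check when debugging training.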
## Debugging
A debug script is provided to run the full pipeline on a few random images from the dataset:
```bash
uv run debug_pipeline.py
```
Results will be in `out/debug`.