Compare commits

30e75dab08...main (11 commits)

| SHA1 |
|---|
| bce950c381 |
| 758991d75d |
| 22b4117990 |
| 6a5c2d09aa |
| eba6e77ef2 |
| 951ad08db4 |
| 636c323d4b |
| 6c1ae59f2d |
| ed5c186f4d |
| 414c78a9c4 |
| fd3d242407 |
.env.example (30 lines changed)

```diff
@@ -1,9 +1,23 @@
-# Input PCB file (Protel 2.8 ASCII). Override with CLI: python3 capacitors_by_net_pair.py <file>
-INPUT_FILE=board.pcb
-# Output JSON path. Override with: python3 capacitors_by_net_pair.py -o out.json
-OUTPUT_FILE=output/capacitors_by_net_pair.json
+# All scripts: set vars in .env or pass via CLI; CLI overrides .env.
 
-# Compare locations: first and second Protel PCB file
-FILE1=board_v1.pcb
-FILE2=board_v2.pcb
-COMPARE_OUTPUT=output/compare_locations.json
+# Capacitors by net pair (KiCad .kicad_pcb only)
+INPUT_FILE=board.kicad_pcb
+OUTPUT_FILE=outputs/capacitors_by_net_pair.json
+
+# Compare KiCad locations (.kicad_pcb)
+FILE1=board_v1.kicad_pcb
+FILE2=board_v2.kicad_pcb
+COMPARE_OUTPUT=outputs/compare_locations.json
+THRESHOLD=1.0
+
+# Spreadsheet diff (designator column, data from row 10)
+SHEET1=sheet1.xlsx
+SHEET2=sheet2.xlsx
+DIFF_OUTPUT=outputs/spreadsheet_diff.json
+DESIGNATOR_COL=0
+START_ROW=9
+
+# Find bottom termination parts (search description column only; no package column)
+SHEET=sheet.xlsx
+BOTTOM_TERM_OUTPUT=outputs/bottom_termination_parts.json
+DESCRIPTION_COL=1
```
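The `.env`-plus-CLI convention introduced above boils down to a precedence rule: a CLI value wins, otherwise the `.env`/environment value, otherwise a built-in default. The `resolve` helper below is hypothetical (the real scripts wire this through `argparse` defaults), but it sketches the order in which values win:

```python
import os

# Hypothetical helper illustrating the precedence these scripts follow:
# CLI value > .env/environment value > built-in fallback.
def resolve(cli_value, env_var, fallback):
    if cli_value:
        return cli_value
    return os.environ.get(env_var, "").strip() or fallback

# Simulate python-dotenv having loaded .env into the environment.
os.environ["OUTPUT_FILE"] = "outputs/capacitors_by_net_pair.json"

print(resolve(None, "OUTPUT_FILE", "out.json"))       # .env value beats the fallback
print(resolve("cli.json", "OUTPUT_FILE", "out.json"))  # CLI beats .env
```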
.gitignore (vendored, 3 lines changed)

```diff
@@ -1,2 +1,3 @@
 *.json
+.env
+outputs/
```
README.md (85 lines changed)

````diff
@@ -1,5 +1,9 @@
 # Altium Scripts
 
+**Convention:** All Python scripts use **.env** for input/output paths (and optional settings); you can override any value via **CLI**. All scripts write JSON output to the **`outputs/`** folder by default. Copy `.env.example` to `.env` and edit.
+
+---
+
 ## Capacitors by net pair
 
 **Script:** `CapacitorsByNetPair.pas`
@@ -55,35 +59,27 @@ Finds all **two-pad components** on the PCB that share the same two nets (e.g. d
 }
 ```
 
-### Protel PCB 2.8 ASCII — easier (Python, no Altium)
+### KiCad .kicad_pcb (Python script)
 
-**Yes — Protel PCB 2.8 ASCII is easier.** It’s plain text, so you can parse it with Python and no OLE/binary handling. You don’t need Altium running.
-
-1. **Export from Altium:** Open your PcbDoc → **File → Save As** (or **Export**) → choose **PCB 2.8 ASCII** or **Protel PCB ASCII** if your version offers it. Some versions use **File → Save Copy As** with format “PCB Binary/ASCII” or similar.
-2. **Run the Python script** on the exported `.pcb` / `.PcbDoc` (ASCII) file:
-
-```bash
-python3 capacitors_by_net_pair.py board.PcbDoc
-python3 capacitors_by_net_pair.py board.PcbDoc -o out.json
-```
-
-**Input/output from .env:** Copy `.env.example` to `.env` and set `INPUT_FILE` and `OUTPUT_FILE`. The script reads these when the optional `python-dotenv` package is installed; CLI arguments override them. Without `.env`, you can still pass the input file and `-o` on the command line. By default the JSON is written to **`output/capacitors_by_net_pair.json`** (the `output/` directory is created if needed).
-
-See **`capacitors_by_net_pair.py`** for the script. It parses COMP/PATTERN/VALUE and NET/PIN data from the ASCII file and produces the same JSON shape as the DelphiScript.
-
-**Test file:** `tests/sample_protel_ascii.pcb` is a minimal Protel PCB 2.8 ASCII sample. Run:
+**Script:** `capacitors_by_net_pair.py` — **KiCad only.** Reads a `.kicad_pcb` file and outputs the same JSON (net pair → capacitors with designator, value, package, total capacitance).
+
+**Usage:**
 
 ```bash
-python3 capacitors_by_net_pair.py tests/sample_protel_ascii.pcb -o tests/out.json
+python3 capacitors_by_net_pair.py board.kicad_pcb
+python3 capacitors_by_net_pair.py board.kicad_pcb -o outputs/capacitors_by_net_pair.json
 ```
 
+**Input/output from .env:** Set `INPUT_FILE` and `OUTPUT_FILE`; CLI overrides. Default output: **`outputs/capacitors_by_net_pair.json`**.
+
 ---
 
-## Compare component locations (two Protel files)
+## Compare component locations (two KiCad files)
 
 **Script:** `compare_protel_locations.py`
 
-Loads two Protel PCB 2.8 ASCII files and reports **which components have moved** between them. Component position is the centroid of pin coordinates. Output is written to `output/compare_locations.json` by default.
+Loads two KiCad `.kicad_pcb` files and reports **which components have moved** between them. Component position is the centroid of pad `(at x y)` coordinates. Output is written to `outputs/compare_locations.json` by default.
 
 - **Moved:** designators with different (x, y) in file2, with old position, new position, and distance.
 - **Only in file1 / only in file2:** components that appear in just one file.
@@ -91,16 +87,61 @@ Loads two Protel PCB 2.8 ASCII files and reports **which components have moved**
 **Usage:**
 
 ```bash
-python3 compare_protel_locations.py board_v1.pcb board_v2.pcb
-python3 compare_protel_locations.py board_v1.pcb board_v2.pcb -o output/compare_locations.json
+python3 compare_protel_locations.py board_v1.kicad_pcb board_v2.kicad_pcb
+python3 compare_protel_locations.py board_v1.kicad_pcb board_v2.kicad_pcb -o outputs/compare_locations.json
 ```
 
 Use **.env** (optional): set `FILE1`, `FILE2`, and `COMPARE_OUTPUT`; CLI arguments override them. Use `--threshold N` to set the minimum position change to count as moved (default 1.0).
 
-**Test:** `tests/sample_protel_ascii.pcb` and `tests/sample_protel_ascii_rev2.pcb` (C1 and C2 moved in rev2):
+---
+
+## Spreadsheet diff by designator
+
+**Script:** `diff_spreadsheets.py`
+
+Compares two spreadsheets (`.xlsx` or `.csv`) on a designator column. Data is read **from row 10** by default (first 9 rows skipped). Outputs which designators are only in file1, only in file2, or in both.
+
+**Usage:**
 
 ```bash
-python3 compare_protel_locations.py tests/sample_protel_ascii.pcb tests/sample_protel_ascii_rev2.pcb
+pip install pandas openpyxl
+python3 diff_spreadsheets.py sheet1.xlsx sheet2.xlsx -o outputs/spreadsheet_diff.json
+```
+
+Options: `--designator-col 0` (0-based column index), `--start-row 9` (0-based; 9 = row 10). Env: `SHEET1`, `SHEET2`, `DIFF_OUTPUT`.
+
+**Test:** `tests/sheet1.csv` and `tests/sheet2.csv` (designators from row 10):
+
+```bash
+python3 diff_spreadsheets.py tests/sheet1.csv tests/sheet2.csv --start-row 9
+```
+
+---
+
+## Find bottom termination parts (QFN, DFN, BGA) by description
+
+**Script:** `find_bottom_termination_parts.py`
+
+Reads the same spreadsheet format (designator column, data from row 10) plus **description** and optionally **package** columns. Finds components whose description or package indicates **bottom termination**, including:
+
+- **Package types:** QFN, DFN, BGA, LGA, SON, MLF, MLP, WDFN, WQFN, VQFN, etc.
+- **Generic:** “bottom termination” (e.g. with 0201 or 0402)
+
+Outputs matching designators, description, package, and the matched pattern to `outputs/bottom_termination_parts.json`.
+
+**Usage:**
+
+```bash
+python3 find_bottom_termination_parts.py sheet.xlsx --description-col 1
+python3 find_bottom_termination_parts.py sheet.xlsx --description-col 3
+```
+
+Env: `SHEET`, `BOTTOM_TERM_OUTPUT`, `DESCRIPTION_COL` (default 1), `START_ROW` (default 9). No package column; only description is searched.
+
+**Test:** `tests/sheet_with_descriptions.csv` (description col 3):
+
+```bash
+python3 find_bottom_termination_parts.py tests/sheet_with_descriptions.csv --description-col 3 --start-row 9
 ```
 
 ### Notes
````
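The net-pair grouping the README describes (two-pad parts that share the same two nets land in the same bucket, regardless of pin order) reduces to sorting the pair before using it as a dictionary key. A minimal sketch with made-up sample data; the `" / "` key format is an assumption for illustration, not necessarily the scripts' exact output:

```python
# Made-up two-pad capacitors; C1 and C2 touch the same nets in opposite pin order.
caps = [
    {"designator": "C1", "nets": ("GND", "VCC"), "value_f": 100e-9},
    {"designator": "C2", "nets": ("VCC", "GND"), "value_f": 10e-6},
    {"designator": "C7", "nets": ("GND", "V3V3"), "value_f": 1e-6},
]

by_pair: dict[str, list[str]] = {}
for c in caps:
    key = " / ".join(sorted(c["nets"]))  # order-independent net-pair key
    by_pair.setdefault(key, []).append(c["designator"])

print(by_pair)  # C1 and C2 share the GND/VCC bucket despite reversed pin order
```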
bottom_termination.py (new file, 166 lines)

```python
#!/usr/bin/env python3
"""
From a spreadsheet (designator + description column, data from row 10), find components
whose description indicates a bottom termination package type (e.g. QFN, DFN, BGA).
Only the description column is searched (no separate package column).

Usage:
    python3 find_bottom_termination_parts.py sheet.xlsx --description-col 1 [-o output.json]
    python3 find_bottom_termination_parts.py  # uses SHEET, DESCRIPTION_COL, BOTTOM_TERM_OUTPUT from .env

All paths and options: use .env or CLI; CLI overrides .env.
"""

import argparse
import json
import os
import re
import sys
from pathlib import Path

try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

try:
    import pandas as pd
except ImportError:
    print("Install pandas and openpyxl: pip install pandas openpyxl", file=sys.stderr)
    sys.exit(1)


# Bottom-termination package patterns (case-insensitive, word boundary where needed)
BOTTOM_TERMINATION_PATTERNS = [
    r"\bqfn\b",   # Quad Flat No-leads
    r"\bdfn\b",   # Dual Flat No-leads
    r"\bbga\b",   # Ball Grid Array
    r"\blga\b",   # Land Grid Array
    r"\bson\b",   # Small Outline No-lead
    r"\bmlf\b",   # Micro Leadframe
    r"\bmlp\b",
    r"\bwdfn\b",
    r"\bwqfn\b",
    r"\bvqfn\b",
    r"\buqfn\b",
    r"\bxqfn\b",
    r"\bbottom\s+termination\b",  # generic + 0201/0402
]
BOTTOM_TERM_REGEXES = [re.compile(p, re.I) for p in BOTTOM_TERMINATION_PATTERNS]


def load_sheet(
    path: str,
    designator_col: int = 0,
    description_col: int = 1,
    start_row: int = 9,
) -> list[dict]:
    """Load spreadsheet; return list of {designator, description} from start_row (0-based; 9 = row 10)."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(path)
    suffix = p.suffix.lower()
    if suffix in (".xlsx", ".xls"):
        df = pd.read_excel(path, header=None, engine="openpyxl" if suffix == ".xlsx" else None)
    elif suffix == ".csv":
        df = pd.read_csv(path, header=None)
    else:
        raise ValueError(f"Unsupported format: {suffix}")
    if max(designator_col, description_col) >= df.shape[1]:
        raise ValueError(f"Sheet has {df.shape[1]} columns; need at least {max(designator_col, description_col) + 1}")
    rows = []
    for i in range(start_row, len(df)):
        des = str(df.iloc[i, designator_col]).strip() if pd.notna(df.iloc[i, designator_col]) else ""
        desc = str(df.iloc[i, description_col]).strip() if pd.notna(df.iloc[i, description_col]) else ""
        if des or desc:
            rows.append({"designator": des, "description": desc})
    return rows


def is_bottom_termination_in_description(description: str) -> tuple[bool, str]:
    """
    True if description (case-insensitive) contains a bottom termination package type
    (e.g. QFN, DFN, BGA). Returns (matched, pattern_matched) e.g. (True, "qfn").
    """
    if not (description or "").strip():
        return False, ""
    d = description.lower()
    for pat in BOTTOM_TERM_REGEXES:
        if pat.search(d):
            name = pat.pattern.replace(r"\b", "").replace("\\s+", " ").strip()
            return True, name
    return False, ""


def main() -> int:
    default_sheet = os.environ.get("SHEET", "").strip() or None
    default_out = os.environ.get("BOTTOM_TERM_OUTPUT", "").strip() or "outputs/bottom_termination_parts.json"
    default_des_col = os.environ.get("DESCRIPTION_COL", "").strip()
    default_start = os.environ.get("START_ROW", "").strip()
    try:
        default_des_col = int(default_des_col) if default_des_col else 1
    except ValueError:
        default_des_col = 1
    try:
        default_start = int(default_start) if default_start else 9
    except ValueError:
        default_start = 9

    parser = argparse.ArgumentParser(
        description="Find components with bottom termination package types (e.g. QFN, DFN, BGA); only description column is searched"
    )
    parser.add_argument("file", nargs="?", default=default_sheet, help="Spreadsheet path (default: SHEET from .env)")
    parser.add_argument("-o", "--output", default=default_out, help="Output JSON (default: BOTTOM_TERM_OUTPUT from .env)")
    parser.add_argument("--designator-col", type=int, default=0, help="Designator column 0-based (default 0)")
    parser.add_argument("--description-col", type=int, default=default_des_col, metavar="COL", help="Description column 0-based (searched for package types; default: DESCRIPTION_COL from .env or 1)")
    parser.add_argument("--start-row", type=int, default=default_start, help="First data row 0-based, 9=row 10 (default: START_ROW from .env or 9)")
    args = parser.parse_args()

    path = (args.file or default_sheet or "").strip()
    if not path:
        parser.error("No spreadsheet. Set SHEET in .env or pass file path.")
    if not Path(path).exists():
        print(f"Error: file not found: {path}", file=sys.stderr)
        return 1

    try:
        rows = load_sheet(
            path,
            args.designator_col,
            args.description_col,
            args.start_row,
        )
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    matches = []
    for r in rows:
        ok, pattern = is_bottom_termination_in_description(r["description"])
        if ok:
            matches.append({**r, "matched_pattern": pattern})

    report = {
        "file": path,
        "designator_col": args.designator_col,
        "description_col": args.description_col,
        "start_row": args.start_row + 1,
        "count": len(matches),
        "parts": matches,
    }

    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")
    print(f"Wrote {args.output}")
    print(f"Found {len(matches)} components with bottom termination (by description column)")
    for m in matches:
        extra = f" [{m['matched_pattern']}]" if m.get("matched_pattern") else ""
        desc = m["description"][:70] + "..." if len(m["description"]) > 70 else m["description"]
        print(f"  {m['designator']}: {desc}{extra}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```
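A quick illustration of why the patterns above lean on `\b` word boundaries: a bare `qfn` should match inside `QFN-24` (hyphen is a non-word character) but must not fire inside a longer token like `WQFN16`, which gets its own dedicated pattern:

```python
import re

# \b keeps short package names from matching inside longer ones:
# "qfn" hits "QFN-24" but not "WQFN16" (WQFN has its own pattern in the list).
pat = re.compile(r"\bqfn\b", re.I)
print(bool(pat.search("IC, buck converter, QFN-24")))  # True
print(bool(pat.search("WQFN16 package")))              # False
```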
capacitors_by_net_pair.py

```diff
@@ -1,15 +1,13 @@
 #!/usr/bin/env python3
 """
-Parse Protel PCB 2.8 ASCII (or Protel 99 SE PCB ASCII) and output capacitors
-by net pair: JSON with net pair as key, designator/value/package and total
-capacitance per net pair.
+Parse KiCad .kicad_pcb and output capacitors by net pair: JSON with net pair
+as key, designator/value/package and total capacitance per net pair.
 
-Input/output paths are read from .env (INPUT_FILE, OUTPUT_FILE) if set;
-CLI arguments override them.
+All paths: use .env (INPUT_FILE, OUTPUT_FILE) or CLI; CLI overrides .env.
 
 Usage:
-    python capacitors_by_net_pair.py [file.pcb] [-o output.json]
-    # or set in .env: INPUT_FILE=board.pcb, OUTPUT_FILE=out.json
+    python capacitors_by_net_pair.py [file.kicad_pcb] [-o output.json]
+    # or set in .env: INPUT_FILE=board.kicad_pcb, OUTPUT_FILE=out.json
 """
 
 import argparse
@@ -75,100 +73,87 @@ def format_capacitance(cap_f: float) -> str:
     return f"{cap_f}F"
 
 
-# --- Protel PCB 2.8 ASCII parsing --------------------------------------------
+# --- KiCad .kicad_pcb parsing -------------------------------------------------
 
-def parse_protel_ascii(path: str) -> tuple[list[dict], dict]:
+def _find_matching_paren(text: str, start: int) -> int:
+    """Return index of closing paren for the ( at start."""
+    depth = 0
+    for i in range(start, len(text)):
+        if text[i] == "(":
+            depth += 1
+        elif text[i] == ")":
+            depth -= 1
+            if depth == 0:
+                return i
+    return -1
+
+
+def parse_kicad_pcb(path: str) -> tuple[list[dict], dict]:
     """
-    Parse file and return (components, pin_to_net).
-    components: list of {
-        "designator": str,
-        "pattern": str,
-        "value": str,
-        "pins": list of (pin_id, net_name or None),
-    }
-    pin_to_net: (designator, pin_id) -> net_name (from NET section if present).
+    Parse KiCad .kicad_pcb (s-expression) and return (components, pin_to_net).
+    components: designator, pattern, value, pins (with net names from pad net).
+    pin_to_net is empty (unused for KiCad).
     """
     text = Path(path).read_text(encoding="utf-8", errors="replace")
-    lines = [line.rstrip() for line in text.splitlines()]
-
-    # NET section: map (Designator, PinNumber) -> NetName
-    # Formats: "NET" "NetName" then "C1-1" / C1-1; or NetName then Designator-Pin lines
-    pin_to_net: dict[tuple[str, str], str] = {}
-    i = 0
-    while i < len(lines):
-        line = lines[i]
-        rest = line.strip()
-        # "NET" "NetName" or NET NetName
-        if re.match(r"^\s*NET\s+", line, re.I):
-            parts = re.split(r"\s+", rest, maxsplit=2)
-            current_net = (parts[1].strip('"') if len(parts) > 1 else "") or None
-            i += 1
-            while i < len(lines):
-                conn = lines[i].strip().strip('"')
-                if re.match(r"^[A-Za-z0-9_]+-\d+$", conn) and current_net:
-                    comp, pin = conn.split("-", 1)
-                    pin_to_net[(comp.upper(), pin)] = current_net
-                    i += 1
-                elif conn.upper() in ("ENDNET", "END", ""):
-                    break
-                else:
-                    i += 1
-            continue
-        i += 1
-
-    # COMP ... ENDCOMP blocks
+    # Net list: (net N "Name")
+    net_by_num: dict[int, str] = {}
+    for m in re.finditer(r'\(\s*net\s+(\d+)\s+"([^"]*)"\s*\)', text, re.I):
+        net_by_num[int(m.group(1))] = m.group(2)
+    # Footprint blocks: find each (footprint "Name" and its matching )
     components: list[dict] = []
-    i = 0
-    while i < len(lines):
-        line = lines[i]
-        if line.strip().upper() != "COMP":
-            i += 1
-            continue
-        i += 1
-        designator = ""
-        pattern = ""
-        value = ""
-        pins: list[tuple[str, str | None]] = []  # (pin_id, net_name)
-
-        while i < len(lines):
-            ln = lines[i]
-            if ln.strip().upper() == "ENDCOMP":
-                i += 1
-                break
-            # Designator: first token on first line after COMP that isn't PATTERN/VALUE/PIN/PINNE
-            if not designator and ln.strip():
-                parts = ln.strip().split()
-                if parts:
-                    candidate = parts[0]
-                    if not re.match(r"^(PATTERN|VALUE|PIN|PINNE|ENDCOMP)$", candidate, re.I):
-                        designator = candidate
-            # PATTERN = value or PATTERN value
-            m = re.match(r"^\s*PATTERN\s+(.+)$", ln, re.I)
-            if m:
-                pattern = m.group(1).strip().strip('"')
-            m = re.match(r"^\s*VALUE\s+(.+)$", ln, re.I)
-            if m:
-                value = m.group(1).strip().strip('"')
-            # PIN: PIN <name> [net] ... or PINNE <name> <net> ... (numbers follow)
-            pin_match = re.match(r"^\s*(?:PINNE|PIN)\s+(\S+)\s+(\S+)", ln, re.I)
-            if pin_match:
-                pin_name, second = pin_match.groups()
-                net_name = second if not second.replace(".", "").isdigit() else None
-                if net_name and net_name.upper() in ("PINNE", "PIN"):
-                    net_name = None
-                if not net_name and (designator, pin_name) in pin_to_net:
-                    net_name = pin_to_net[(designator, pin_name)]
-                elif not net_name and (designator.upper(), pin_name) in pin_to_net:
-                    net_name = pin_to_net[(designator.upper(), pin_name)]
-                pins.append((pin_name, net_name))
-            else:
-                pin_simple = re.match(r"^\s*(?:PINNE|PIN)\s+(\S+)", ln, re.I)
-                if pin_simple:
-                    pn = pin_simple.group(1)
-                    net_name = pin_to_net.get((designator, pn)) or pin_to_net.get((designator.upper(), pn))
-                    pins.append((pn, net_name))
-            i += 1
+    fp_pos = 0
+    while True:
+        fp_start = text.find("(footprint ", fp_pos)
+        if fp_start < 0:
+            break
+        quote = text.find('"', fp_start + 10)
+        if quote < 0:
+            break
+        quote_end = text.find('"', quote + 1)
+        if quote_end < 0:
+            break
+        lib_name = text[quote + 1 : quote_end]
+        fp_end = _find_matching_paren(text, fp_start)
+        if fp_end < 0:
+            break
+        rest = text[fp_start:fp_end + 1]
+        pattern = lib_name.split(":")[-1] if ":" in lib_name else lib_name
+        ref_m = re.search(r'\(\s*fp_text\s+reference\s+"([^"]+)"', rest)
+        value_m = re.search(r'\(\s*fp_text\s+value\s+"([^"]+)"', rest)
+        designator = ref_m.group(1) if ref_m else ""
+        value = value_m.group(1) if value_m else "?"
+        pins: list[tuple[str, str | None]] = []
+        # Find each (pad "N" ... and the (net num "Name") inside that pad's span
+        pos = 0
+        while True:
+            pad_start = rest.find("(pad ", pos)
+            if pad_start < 0:
+                break
+            quote = rest.find('"', pad_start + 5)
+            if quote < 0:
+                break
+            quote_end = rest.find('"', quote + 1)
+            if quote_end < 0:
+                break
+            pad_num = rest[quote + 1 : quote_end]
+            depth = 0
+            end = pad_start + 4
+            for i, c in enumerate(rest[pad_start:], start=pad_start):
+                if c == "(":
+                    depth += 1
+                elif c == ")":
+                    depth -= 1
+                    if depth == 0:
+                        end = i
+                        break
+            block = rest[pad_start:end]
+            net_m = re.search(r'\(\s*net\s+(\d+)\s+"([^"]*)"\s*\)', block)
+            if net_m:
+                net_name = net_m.group(2) or net_by_num.get(int(net_m.group(1)), "")
+            else:
+                net_name = net_by_num.get(0, "") if 0 in net_by_num else None
+            pins.append((pad_num, net_name or None))
+            pos = end + 1
 
         if designator:
             components.append({
                 "designator": designator,
@@ -176,7 +161,8 @@ def parse_protel_ascii(path: str) -> tuple[list[dict], dict]:
                 "value": value or "?",
                 "pins": pins,
             })
-    return components, pin_to_net
+        fp_pos = fp_end + 1
+    return components, {}
 
 
 def build_net_key(net1: str, net2: str) -> str:
@@ -188,19 +174,19 @@
 
 def main() -> int:
     default_input = os.environ.get("INPUT_FILE", "").strip() or None
-    default_output = os.environ.get("OUTPUT_FILE", "").strip() or "output/capacitors_by_net_pair.json"
+    default_output = os.environ.get("OUTPUT_FILE", "").strip() or "outputs/capacitors_by_net_pair.json"
 
-    parser = argparse.ArgumentParser(description="List capacitors by net pair from Protel PCB 2.8 ASCII")
+    parser = argparse.ArgumentParser(description="List capacitors by net pair from KiCad .kicad_pcb")
     parser.add_argument(
         "file",
         nargs="?",
        default=default_input,
-        help="Path to .pcb / .PcbDoc (ASCII) file (default: INPUT_FILE from .env)",
+        help="Path to .kicad_pcb file (default: INPUT_FILE from .env)",
     )
     parser.add_argument(
         "-o", "--output",
         default=default_output,
-        help="Output JSON path (default: OUTPUT_FILE from .env or output/capacitors_by_net_pair.json)",
+        help="Output JSON path (default: OUTPUT_FILE from .env or outputs/capacitors_by_net_pair.json)",
     )
     parser.add_argument("--all-two-pad", action="store_true", help="Include all 2-pad parts, not only C*")
     args = parser.parse_args()
@@ -210,7 +196,7 @@ def main() -> int:
         parser.error("No input file. Set INPUT_FILE in .env or pass the file path as an argument.")
 
     try:
-        components, pin_to_net = parse_protel_ascii(input_path)
+        components, pin_to_net = parse_kicad_pcb(input_path)
     except Exception as e:
         print(f"Parse error: {e}", file=sys.stderr)
         return 1
```
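The depth-counting scan behind `_find_matching_paren` can be exercised on a one-line s-expression. The standalone copy below mirrors the helper in the diff: it walks from the opening `(`, tracking nesting depth, and returns the index of the paren that brings the depth back to zero:

```python
def find_matching_paren(text: str, start: int) -> int:
    # Depth-counting scan, same idea as the parser's _find_matching_paren helper.
    depth = 0
    for i in range(start, len(text)):
        if text[i] == "(":
            depth += 1
        elif text[i] == ")":
            depth -= 1
            if depth == 0:
                return i
    return -1  # unbalanced input

s = '(footprint "C_0402" (pad "1" (net 2 "GND")))'
end = find_matching_paren(s, 0)
print(s[:end + 1] == s)  # the scan spans the whole footprint block: True
```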
122
diff_spreadsheets.py
Normal file
122
diff_spreadsheets.py
Normal file
@@ -0,0 +1,122 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Diff two spreadsheets by designator column. Data starts at row 10 by default.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
python3 diff_spreadsheets.py file1.xlsx file2.xlsx [-o output.json]
|
||||||
|
python3 diff_spreadsheets.py # uses SHEET1, SHEET2, DIFF_OUTPUT from .env
|
||||||
|
|
||||||
|
All paths and options: use .env or CLI; CLI overrides .env.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
try:
|
||||||
|
from dotenv import load_dotenv
|
||||||
|
load_dotenv()
|
||||||
|
except ImportError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
try:
|
||||||
|
import pandas as pd
|
||||||
|
except ImportError:
|
||||||
|
print("Install pandas and openpyxl: pip install pandas openpyxl", file=sys.stderr)
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
|
||||||
|
def read_designators(path: str, designator_col: int = 0, start_row: int = 9) -> set[str]:
|
||||||
|
"""Load spreadsheet and return set of designator values from the given column, starting at start_row (0-based; 9 = row 10)."""
|
||||||
|
p = Path(path)
|
||||||
|
if not p.exists():
|
||||||
|
raise FileNotFoundError(path)
|
||||||
|
suffix = p.suffix.lower()
|
||||||
|
if suffix in (".xlsx", ".xls"):
|
||||||
|
df = pd.read_excel(path, header=None, engine="openpyxl" if suffix == ".xlsx" else None)
|
||||||
|
elif suffix == ".csv":
|
||||||
|
df = pd.read_csv(path, header=None)
|
||||||
|
else:
|
||||||
|
raise ValueError(f"Unsupported format: {suffix}")
|
||||||
|
if designator_col >= df.shape[1]:
|
||||||
|
raise ValueError(f"Column {designator_col} not in sheet (has {df.shape[1]} columns)")
|
||||||
|
# from start_row to end, take the designator column, drop NaN, strip strings
|
||||||
|
col = df.iloc[start_row:, designator_col]
|
||||||
|
values = set()
|
||||||
|
for v in col:
|
||||||
|
if pd.isna(v):
|
||||||
|
continue
|
||||||
|
s = str(v).strip()
|
||||||
|
if s:
|
||||||
|
values.add(s)
|
||||||
|
return values
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
default1 = os.environ.get("SHEET1", "").strip() or None
|
||||||
|
default2 = os.environ.get("SHEET2", "").strip() or None
|
||||||
|
    default_out = os.environ.get("DIFF_OUTPUT", "").strip() or "outputs/spreadsheet_diff.json"
    default_col = os.environ.get("DESIGNATOR_COL", "").strip()
    default_start = os.environ.get("START_ROW", "").strip()
    try:
        default_col = int(default_col) if default_col else 0
    except ValueError:
        default_col = 0
    try:
        default_start = int(default_start) if default_start else 9
    except ValueError:
        default_start = 9

    parser = argparse.ArgumentParser(description="Diff two spreadsheets by designator column")
    parser.add_argument("file1", nargs="?", default=default1, help="First spreadsheet (default: SHEET1 from .env)")
    parser.add_argument("file2", nargs="?", default=default2, help="Second spreadsheet (default: SHEET2 from .env)")
    parser.add_argument("-o", "--output", default=default_out, help="Output JSON (default: DIFF_OUTPUT from .env)")
    parser.add_argument("--designator-col", type=int, default=default_col, help="Designator column, 0-based (default: DESIGNATOR_COL from .env or 0)")
    parser.add_argument("--start-row", type=int, default=default_start, help="First data row, 0-based; 9 = row 10 (default: START_ROW from .env or 9)")
    args = parser.parse_args()

    path1 = (args.file1 or default1 or "").strip()
    path2 = (args.file2 or default2 or "").strip()
    if not path1 or not path2:
        parser.error("Need two files. Set SHEET1 and SHEET2 in .env or pass two paths.")

    try:
        d1 = read_designators(path1, args.designator_col, args.start_row)
        d2 = read_designators(path2, args.designator_col, args.start_row)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    only1 = sorted(d1 - d2)
    only2 = sorted(d2 - d1)
    both = sorted(d1 & d2)

    report = {
        "file1": path1,
        "file2": path2,
        "designator_col": args.designator_col,
        "start_row": args.start_row + 1,
        "only_in_file1": only1,
        "only_in_file2": only2,
        "in_both": both,
        "count_only_in_file1": len(only1),
        "count_only_in_file2": len(only2),
        "count_in_both": len(both),
    }

    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")
    print(f"Wrote {args.output}")
    print(f"Only in file1: {len(only1)} | Only in file2: {len(only2)} | In both: {len(both)}")
    if only1:
        print("  Only in file1:", ", ".join(only1[:20]) + (" ..." if len(only1) > 20 else ""))
    if only2:
        print("  Only in file2:", ", ".join(only2[:20]) + (" ..." if len(only2) > 20 else ""))
    return 0


if __name__ == "__main__":
    sys.exit(main())
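The report's only-in/in-both fields come from plain set arithmetic over the two designator columns. A minimal standalone sketch (hypothetical designator sets, not read from real sheets):

```python
# Set arithmetic behind the diff report; designators are plain strings.
d1 = {"C1", "C2", "C3", "R1"}  # designators read from sheet 1
d2 = {"C1", "C3", "R1", "C4"}  # designators read from sheet 2

only1 = sorted(d1 - d2)  # present only in sheet 1
only2 = sorted(d2 - d1)  # present only in sheet 2
both = sorted(d1 & d2)   # present in both sheets

print(only1, only2, both)  # ['C2'] ['C4'] ['C1', 'C3', 'R1']
```

Sorting the sets makes the JSON output stable across runs, which keeps the report itself diffable.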
@@ -1,12 +1,12 @@
 #!/usr/bin/env python3
 """
-Load two Protel PCB 2.8 ASCII files and report which components have moved
-between them. Component position is taken as the centroid of pin coordinates.
+Load two KiCad .kicad_pcb files and report which components have moved
+between them. Component position is the centroid of pad (at x y) coordinates.
 
-Input/output paths can be set in .env (FILE1, FILE2, COMPARE_OUTPUT); CLI overrides.
+All paths: use .env (FILE1, FILE2, COMPARE_OUTPUT) or CLI; CLI overrides .env.
 
 Usage:
-    python3 compare_protel_locations.py file1.pcb file2.pcb [-o report.json]
+    python3 compare_protel_locations.py file1.kicad_pcb file2.kicad_pcb [-o report.json]
     python3 compare_protel_locations.py          # uses FILE1, FILE2 from .env
 """
@@ -25,58 +25,66 @@ except ImportError:
     pass
 
 
+def _find_matching_paren(text: str, start: int) -> int:
+    """Return index of closing paren for the ( at start."""
+    depth = 0
+    for i in range(start, len(text)):
+        if text[i] == "(":
+            depth += 1
+        elif text[i] == ")":
+            depth -= 1
+            if depth == 0:
+                return i
+    return -1
+
+
 def parse_components_with_positions(path: str) -> dict[str, dict]:
     """
-    Parse a Protel PCB 2.8 ASCII file and return a dict:
+    Parse a KiCad .kicad_pcb file and return a dict:
         designator -> { "x": float, "y": float, "pins": [(x,y), ...], "pattern": str, "value": str }
-    Position is the centroid of all pin coordinates (from the numeric line after each PIN line).
+    Position is the centroid of all pad (at x y) coordinates.
     """
     text = Path(path).read_text(encoding="utf-8", errors="replace")
-    lines = [line.rstrip() for line in text.splitlines()]
 
     components: dict[str, dict] = {}
-    i = 0
-    while i < len(lines):
-        if lines[i].strip().upper() != "COMP":
-            i += 1
-            continue
-        i += 1
-        designator = ""
-        pattern = ""
-        value = ""
+    fp_pos = 0
+    while True:
+        fp_start = text.find("(footprint ", fp_pos)
+        if fp_start < 0:
+            break
+        quote = text.find('"', fp_start + 10)
+        if quote < 0:
+            break
+        quote_end = text.find('"', quote + 1)
+        if quote_end < 0:
+            break
+        lib_name = text[quote + 1 : quote_end]
+        fp_end = _find_matching_paren(text, fp_start)
+        if fp_end < 0:
+            break
+        rest = text[fp_start:fp_end + 1]
+        pattern = lib_name.split(":")[-1] if ":" in lib_name else lib_name
+        ref_m = re.search(r'\(\s*fp_text\s+reference\s+"([^"]+)"', rest)
+        value_m = re.search(r'\(\s*fp_text\s+value\s+"([^"]+)"', rest)
+        designator = ref_m.group(1) if ref_m else ""
+        value = value_m.group(1) if value_m else "?"
         pin_coords: list[tuple[float, float]] = []
-        while i < len(lines):
-            ln = lines[i]
-            if ln.strip().upper() == "ENDCOMP":
-                i += 1
+        pos = 0
+        while True:
+            pad_start = rest.find("(pad ", pos)
+            if pad_start < 0:
                 break
-            if not designator and ln.strip():
-                parts = ln.strip().split()
-                if parts and not re.match(
-                    r"^(PATTERN|VALUE|PIN|PINNE|ENDCOMP)$", parts[0], re.I
-                ):
-                    designator = parts[0]
-            m = re.match(r"^\s*PATTERN\s+(.+)$", ln, re.I)
-            if m:
-                pattern = m.group(1).strip().strip('"')
-            m = re.match(r"^\s*VALUE\s+(.+)$", ln, re.I)
-            if m:
-                value = m.group(1).strip().strip('"')
-            if re.match(r"^\s*(?:PINNE|PIN)\s+", ln, re.I):
-                # Next line often has coordinates: first two numbers are X Y
-                if i + 1 < len(lines):
-                    next_ln = lines[i + 1].strip().split()
-                    nums = [t for t in next_ln if re.match(r"^-?\d+$", t)]
-                    if len(nums) >= 2:
-                        try:
-                            x, y = int(nums[0]), int(nums[1])
-                            pin_coords.append((float(x), float(y)))
-                        except ValueError:
-                            pass
-                    i += 1
-            i += 1
+            pad_end = _find_matching_paren(rest, pad_start)
+            if pad_end < 0:
+                break
+            block = rest[pad_start:pad_end + 1]
+            at_m = re.search(r'\(\s*at\s+([-\d.]+)\s+([-\d.]+)', block)
+            if at_m:
+                try:
+                    x, y = float(at_m.group(1)), float(at_m.group(2))
+                    pin_coords.append((x, y))
+                except ValueError:
+                    pass
+            pos = pad_end + 1
 
         if designator and pin_coords:
             cx = sum(p[0] for p in pin_coords) / len(pin_coords)
             cy = sum(p[1] for p in pin_coords) / len(pin_coords)
@@ -87,6 +95,7 @@ def parse_components_with_positions(path: str) -> dict[str, dict]:
                 "pattern": pattern or "",
                 "value": value or "?",
             }
+        fp_pos = fp_end + 1
     return components
 
 
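The new pad-centroid approach can be sanity-checked on a tiny inline footprint. A minimal self-contained sketch (hypothetical snippet text; the helper mirrors `_find_matching_paren` from the diff above):

```python
import re

def find_matching_paren(text: str, start: int) -> int:
    # Scan forward tracking nesting depth until the paren opened at `start` closes.
    depth = 0
    for i in range(start, len(text)):
        if text[i] == "(":
            depth += 1
        elif text[i] == ")":
            depth -= 1
            if depth == 0:
                return i
    return -1

# Hypothetical one-line footprint with two pads, for illustration only.
snippet = '(footprint "Lib:C_0805" (fp_text reference "C7") (pad "1" (at 1.0 0.0)) (pad "2" (at -1.0 0.0)))'

end = find_matching_paren(snippet, 0)      # index of the footprint's closing paren
block = snippet[: end + 1]
coords = [(float(x), float(y))
          for x, y in re.findall(r'\(\s*at\s+([-\d.]+)\s+([-\d.]+)', block)]
cx = sum(x for x, _ in coords) / len(coords)
cy = sum(y for _, y in coords) / len(coords)
print((cx, cy))  # centroid of the two pad positions: (0.0, 0.0)
```

Balanced-paren scanning is what keeps pads of one footprint from bleeding into the next; a naive `text.find(")")` would cut the block off at the first nested close.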
@@ -99,35 +108,40 @@ def main() -> int:
     default_file2 = os.environ.get("FILE2", "").strip() or None
     default_output = (
         os.environ.get("COMPARE_OUTPUT", "").strip()
-        or "output/compare_locations.json"
+        or "outputs/compare_locations.json"
     )
+    default_threshold = os.environ.get("THRESHOLD", "").strip()
+    try:
+        default_threshold = float(default_threshold) if default_threshold else 1.0
+    except ValueError:
+        default_threshold = 1.0
 
     parser = argparse.ArgumentParser(
-        description="Compare two Protel PCB ASCII files and list components that moved"
+        description="Compare two KiCad .kicad_pcb files and list components that moved"
     )
     parser.add_argument(
         "file1",
         nargs="?",
         default=default_file1,
-        help="First PCB file (default: FILE1 from .env)",
+        help="First .kicad_pcb file (default: FILE1 from .env)",
     )
     parser.add_argument(
         "file2",
         nargs="?",
         default=default_file2,
-        help="Second PCB file (default: FILE2 from .env)",
+        help="Second .kicad_pcb file (default: FILE2 from .env)",
     )
     parser.add_argument(
         "-o",
         "--output",
         default=default_output,
-        help="Output JSON path (default: COMPARE_OUTPUT from .env or output/compare_locations.json)",
+        help="Output JSON path (default: COMPARE_OUTPUT from .env or outputs/compare_locations.json)",
     )
     parser.add_argument(
         "--threshold",
         type=float,
-        default=1.0,
-        help="Minimum position change to count as moved (default: 1.0)",
+        default=default_threshold,
+        help="Minimum position change to count as moved (default: THRESHOLD from .env or 1.0)",
     )
     args = parser.parse_args()
 
@@ -135,7 +149,7 @@ def main() -> int:
     path2 = (args.file2 or default_file2 or "").strip()
     if not path1 or not path2:
         parser.error(
-            "Need two PCB files. Set FILE1 and FILE2 in .env or pass two paths."
+            "Need two .kicad_pcb files. Set FILE1 and FILE2 in .env or pass two paths."
         )
 
     if not Path(path1).exists():
@@ -1 +1,3 @@
 python-dotenv>=1.0.0
+pandas>=2.0.0
+openpyxl>=3.1.0
tests/sheet1.csv (new file, 13 lines)
@@ -0,0 +1,13 @@
+x
+x
+x
+x
+x
+x
+x
+x
+x
+C1,10uF,0805
+C2,22uF,0805
+C3,1uF,0603
+R1,10k,0805

tests/sheet2.csv (new file, 13 lines)
@@ -0,0 +1,13 @@
+x
+x
+x
+x
+x
+x
+x
+x
+x
+C1,10uF,0805
+C3,1uF,0603
+R1,10k,0805
+C4,100nF,0603
tests/sheet_with_descriptions.csv (new file, 16 lines)
@@ -0,0 +1,16 @@
+x,x,x,x
+x,x,x,x
+x,x,x,x
+x,x,x,x
+x,x,x,x
+x,x,x,x
+x,x,x,x
+x,x,x,x
+x,x,x,x
+C1,10uF,0805,Capacitor MLCC bottom termination 0201 10uF 10V
+C2,22uF,0805,Capacitor MLCC 22uF 16V
+C3,1uF,0603,Capacitor MLCC bottom termination 0402 1uF 6.3V
+R1,10k,0805,Resistor thick film 10k 1%
+R2,100R,0201,Resistor bottom termination 0201 100ohm
+U1,IC Regulator,QFN-16,3.3V LDO QFN-16 300mA
+U2,MCU,DFN-8,ARM Cortex-M0+ DFN-8
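This fixture exercises the description-only search for bottom-termination parts. A minimal sketch with the fixture's data rows inlined (column index 3 is assumed here because that is where the descriptions sit in this CSV; the real script takes the column from DESCRIPTION_COL in .env):

```python
import csv
import io

# Data rows from tests/sheet_with_descriptions.csv (header padding rows omitted).
rows = """\
C1,10uF,0805,Capacitor MLCC bottom termination 0201 10uF 10V
C2,22uF,0805,Capacitor MLCC 22uF 16V
C3,1uF,0603,Capacitor MLCC bottom termination 0402 1uF 6.3V
R1,10k,0805,Resistor thick film 10k 1%
R2,100R,0201,Resistor bottom termination 0201 100ohm
"""

# Case-insensitive substring match on the description column only.
hits = [r[0] for r in csv.reader(io.StringIO(rows))
        if "bottom termination" in r[3].lower()]
print(hits)  # ['C1', 'C3', 'R2']
```

Note that the match deliberately ignores the package column, so parts like R2 (a 0201 resistor) are found by description even though no package name mentions bottom termination.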