Compare commits
11 Commits: 30e75dab08 ... main

| SHA1 |
|---|
| bce950c381 |
| 758991d75d |
| 22b4117990 |
| 6a5c2d09aa |
| eba6e77ef2 |
| 951ad08db4 |
| 636c323d4b |
| 6c1ae59f2d |
| ed5c186f4d |
| 414c78a9c4 |
| fd3d242407 |
.env.example (30 lines)
@@ -1,9 +1,23 @@
-# Input PCB file (Protel 2.8 ASCII). Override with CLI: python3 capacitors_by_net_pair.py <file>
-INPUT_FILE=board.pcb
-# Output JSON path. Override with: python3 capacitors_by_net_pair.py -o out.json
-OUTPUT_FILE=output/capacitors_by_net_pair.json
 # All scripts: set vars in .env or pass via CLI; CLI overrides .env.
 
-# Compare locations: first and second Protel PCB file
-FILE1=board_v1.pcb
-FILE2=board_v2.pcb
-COMPARE_OUTPUT=output/compare_locations.json
+# Capacitors by net pair (KiCad .kicad_pcb only)
+INPUT_FILE=board.kicad_pcb
+OUTPUT_FILE=outputs/capacitors_by_net_pair.json
+
+# Compare KiCad locations (.kicad_pcb)
+FILE1=board_v1.kicad_pcb
+FILE2=board_v2.kicad_pcb
+COMPARE_OUTPUT=outputs/compare_locations.json
+THRESHOLD=1.0
+
+# Spreadsheet diff (designator column, data from row 10)
+SHEET1=sheet1.xlsx
+SHEET2=sheet2.xlsx
+DIFF_OUTPUT=outputs/spreadsheet_diff.json
+DESIGNATOR_COL=0
+START_ROW=9
+
+# Find bottom termination parts (search description column only; no package column)
+SHEET=sheet.xlsx
+BOTTOM_TERM_OUTPUT=outputs/bottom_termination_parts.json
+DESCRIPTION_COL=1
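The `.env`-plus-CLI convention in this file boils down to a three-level precedence: CLI argument, then environment (as loaded from `.env`), then a built-in default. A minimal sketch of that lookup (the helper name `resolve` is mine, not from the scripts):

```python
import os

def resolve(cli_value, env_name, fallback):
    """CLI argument wins; then the environment (.env); then the built-in default."""
    if cli_value:
        return cli_value
    return os.environ.get(env_name, "").strip() or fallback

# .env would set OUTPUT_FILE; it is used only when no CLI flag is given
os.environ["OUTPUT_FILE"] = "outputs/custom.json"
print(resolve(None, "OUTPUT_FILE", "outputs/default.json"))        # outputs/custom.json
print(resolve("cli.json", "OUTPUT_FILE", "outputs/default.json"))  # cli.json
```

The scripts below implement exactly this ordering via `os.environ.get(...) or default` feeding `argparse` defaults.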
.gitignore (vendored, 3 lines)
@@ -1,2 +1,3 @@
 *.json
 .env
+outputs/
README.md (85 lines)
@@ -1,5 +1,9 @@
 # Altium Scripts
 
+**Convention:** All Python scripts use **.env** for input/output paths (and optional settings); you can override any value via **CLI**. All scripts write JSON output to the **`outputs/`** folder by default. Copy `.env.example` to `.env` and edit.
+
+---
+
 ## Capacitors by net pair
 
 **Script:** `CapacitorsByNetPair.pas`
@@ -55,35 +59,27 @@ Finds all **two-pad components** on the PCB that share the same two nets (e.g. d
 }
 ```
 
-### Protel PCB 2.8 ASCII — easier (Python, no Altium)
+### KiCad .kicad_pcb (Python script)
 
-**Yes — Protel PCB 2.8 ASCII is easier.** It’s plain text, so you can parse it with Python and no OLE/binary handling. You don’t need Altium running.
+**Script:** `capacitors_by_net_pair.py` — **KiCad only.** Reads a `.kicad_pcb` file and outputs the same JSON (net pair → capacitors with designator, value, package, total capacitance).
 
-1. **Export from Altium:** Open your PcbDoc → **File → Save As** (or **Export**) → choose **PCB 2.8 ASCII** or **Protel PCB ASCII** if your version offers it. Some versions use **File → Save Copy As** with format “PCB Binary/ASCII” or similar.
-2. **Run the Python script** on the exported `.pcb` / `.PcbDoc` (ASCII) file:
-
-```bash
-python3 capacitors_by_net_pair.py board.PcbDoc
-python3 capacitors_by_net_pair.py board.PcbDoc -o out.json
-```
-
-**Input/output from .env:** Copy `.env.example` to `.env` and set `INPUT_FILE` and `OUTPUT_FILE`. The script reads these when the optional `python-dotenv` package is installed; CLI arguments override them. Without `.env`, you can still pass the input file and `-o` on the command line. By default the JSON is written to **`output/capacitors_by_net_pair.json`** (the `output/` directory is created if needed).
-
-See **`capacitors_by_net_pair.py`** for the script. It parses COMP/PATTERN/VALUE and NET/PIN data from the ASCII file and produces the same JSON shape as the DelphiScript.
-
-**Test file:** `tests/sample_protel_ascii.pcb` is a minimal Protel PCB 2.8 ASCII sample. Run:
+**Usage:**
 
 ```bash
-python3 capacitors_by_net_pair.py tests/sample_protel_ascii.pcb -o tests/out.json
+python3 capacitors_by_net_pair.py board.kicad_pcb
+python3 capacitors_by_net_pair.py board.kicad_pcb -o outputs/capacitors_by_net_pair.json
 ```
 
+**Input/output from .env:** Set `INPUT_FILE` and `OUTPUT_FILE`; CLI overrides. Default output: **`outputs/capacitors_by_net_pair.json`**.
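The net-pair grouping the README describes can be sketched independently of any board format; the parts list and the `cap_farads` helper below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical parsed two-pad capacitors: (designator, value, the two connected nets)
caps = [
    ("C1", "100nF", ("GND", "VCC")),
    ("C2", "10uF", ("VCC", "GND")),
    ("C3", "1uF", ("GND", "V3A")),
]

UNITS = {"pF": 1e-12, "nF": 1e-9, "uF": 1e-6}

def cap_farads(value: str) -> float:
    """Parse values like '100nF' into farads; unknown units count as 0."""
    for unit, mult in UNITS.items():
        if value.endswith(unit):
            return float(value[: -len(unit)]) * mult
    return 0.0

by_pair = defaultdict(list)
for des, val, nets in caps:
    by_pair[" - ".join(sorted(nets))].append(des)  # order-independent net-pair key

print(dict(by_pair))  # {'GND - VCC': ['C1', 'C2'], 'GND - V3A': ['C3']}
```

Sorting the two net names before joining is what makes `("GND", "VCC")` and `("VCC", "GND")` land in the same bucket.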
 ---
 
-## Compare component locations (two Protel files)
+## Compare component locations (two KiCad files)
 
 **Script:** `compare_protel_locations.py`
 
-Loads two Protel PCB 2.8 ASCII files and reports **which components have moved** between them. Component position is the centroid of pin coordinates. Output is written to `output/compare_locations.json` by default.
+Loads two KiCad `.kicad_pcb` files and reports **which components have moved** between them. Component position is the centroid of pad `(at x y)` coordinates. Output is written to `outputs/compare_locations.json` by default.
 
 - **Moved:** designators with different (x, y) in file2, with old position, new position, and distance.
 - **Only in file1 / only in file2:** components that appear in just one file.
@@ -91,16 +87,61 @@ Loads two Protel PCB 2.8 ASCII files and reports **which components have moved**
 **Usage:**
 
 ```bash
-python3 compare_protel_locations.py board_v1.pcb board_v2.pcb
-python3 compare_protel_locations.py board_v1.pcb board_v2.pcb -o output/compare_locations.json
+python3 compare_protel_locations.py board_v1.kicad_pcb board_v2.kicad_pcb
+python3 compare_protel_locations.py board_v1.kicad_pcb board_v2.kicad_pcb -o outputs/compare_locations.json
 ```
 
 Use **.env** (optional): set `FILE1`, `FILE2`, and `COMPARE_OUTPUT`; CLI arguments override them. Use `--threshold N` to set the minimum position change to count as moved (default 1.0).
 
-**Test:** `tests/sample_protel_ascii.pcb` and `tests/sample_protel_ascii_rev2.pcb` (C1 and C2 moved in rev2):
+---
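The moved-component check described above reduces to a centroid plus a distance threshold; a minimal sketch under that reading (function names are mine, not the script's):

```python
import math

def centroid(points):
    """Average of pad coordinates."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def moved(p_old, p_new, threshold=1.0):
    """True when the centroid shifted by at least `threshold` (default matches the script's 1.0)."""
    return math.dist(p_old, p_new) >= threshold

old = centroid([(0.0, 0.0), (2.0, 0.0)])  # (1.0, 0.0)
new = centroid([(3.0, 4.0), (5.0, 4.0)])  # (4.0, 4.0)
print(moved(old, new), math.dist(old, new))  # True 5.0
```

Using the pad centroid rather than a single anchor point makes the comparison insensitive to how each format defines a footprint origin.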
+## Spreadsheet diff by designator
+
+**Script:** `diff_spreadsheets.py`
+
+Compares two spreadsheets (`.xlsx` or `.csv`) on a designator column. Data is read **from row 10** by default (first 9 rows skipped). Outputs which designators are only in file1, only in file2, or in both.
+
+**Usage:**
+
 ```bash
-python3 compare_protel_locations.py tests/sample_protel_ascii.pcb tests/sample_protel_ascii_rev2.pcb
+pip install pandas openpyxl
+python3 diff_spreadsheets.py sheet1.xlsx sheet2.xlsx -o outputs/spreadsheet_diff.json
 ```
+
+Options: `--designator-col 0` (0-based column index), `--start-row 9` (0-based; 9 = row 10). Env: `SHEET1`, `SHEET2`, `DIFF_OUTPUT`.
+
+**Test:** `tests/sheet1.csv` and `tests/sheet2.csv` (designators from row 10):
+
+```bash
+python3 diff_spreadsheets.py tests/sheet1.csv tests/sheet2.csv --start-row 9
+```
+---
+
+## Find bottom termination parts (QFN, DFN, BGA) by description
+
+**Script:** `find_bottom_termination_parts.py`
+
+Reads the same spreadsheet format (designator column, data from row 10) plus a **description** column. Finds components whose description indicates **bottom termination**, including:
+
+- **Package types:** QFN, DFN, BGA, LGA, SON, MLF, MLP, WDFN, WQFN, VQFN, etc.
+- **Generic:** “bottom termination” (e.g. with 0201 or 0402)
+
+Outputs matching designators, description, and the matched pattern to `outputs/bottom_termination_parts.json`.
+
+**Usage:**
+
+```bash
+python3 find_bottom_termination_parts.py sheet.xlsx --description-col 1
+python3 find_bottom_termination_parts.py sheet.xlsx --description-col 3
+```
+
+Env: `SHEET`, `BOTTOM_TERM_OUTPUT`, `DESCRIPTION_COL` (default 1), `START_ROW` (default 9). No package column; only description is searched.
+
+**Test:** `tests/sheet_with_descriptions.csv` (description col 3):
+
+```bash
+python3 find_bottom_termination_parts.py tests/sheet_with_descriptions.csv --description-col 3 --start-row 9
+```
 
 ### Notes
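The description matching comes down to case-insensitive word-boundary regexes; a standalone check of that idea, using a subset of the package patterns:

```python
import re

# Word boundaries keep "qfn" from matching inside unrelated tokens
PATTERNS = [re.compile(p, re.I) for p in (r"\bqfn\b", r"\bbga\b", r"\bbottom\s+termination\b")]

def has_bottom_termination(description: str) -> bool:
    return any(p.search(description) for p in PATTERNS)

print(has_bottom_termination("Buck converter, 3x3 QFN-20"))  # True
print(has_bottom_termination("CAP CER 100NF 0402"))          # False
print(has_bottom_termination("Bottom termination, 0201"))    # True
```

`\bqfn\b` still matches "QFN-20" because `-` is a non-word character, so the boundary after "QFN" holds.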
bottom_termination.py (166 lines, new file)
@@ -0,0 +1,166 @@
#!/usr/bin/env python3
"""
From a spreadsheet (designator + description column, data from row 10), find components
whose description indicates bottom termination package type (e.g. QFN, DFN, BGA).
Only the description column is searched (no separate package column).

Usage:
    python3 find_bottom_termination_parts.py sheet.xlsx --description-col 1 [-o output.json]
    python3 find_bottom_termination_parts.py   # uses SHEET, DESCRIPTION_COL, BOTTOM_TERM_OUTPUT from .env

All paths and options: use .env or CLI; CLI overrides .env.
"""

import argparse
import json
import os
import re
import sys
from pathlib import Path

try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

try:
    import pandas as pd
except ImportError:
    print("Install pandas and openpyxl: pip install pandas openpyxl", file=sys.stderr)
    sys.exit(1)


# Bottom-termination package patterns (case-insensitive, word boundary where needed)
BOTTOM_TERMINATION_PATTERNS = [
    r"\bqfn\b",   # Quad Flat No-leads
    r"\bdfn\b",   # Dual Flat No-leads
    r"\bbga\b",   # Ball Grid Array
    r"\blga\b",   # Land Grid Array
    r"\bson\b",   # Small Outline No-lead
    r"\bmlf\b",   # Micro Leadframe
    r"\bmlp\b",
    r"\bwdfn\b",
    r"\bwqfn\b",
    r"\bvqfn\b",
    r"\buqfn\b",
    r"\bxqfn\b",
    r"\bbottom\s+termination\b",  # generic "bottom termination" (e.g. with 0201/0402)
]
BOTTOM_TERM_REGEXES = [re.compile(p, re.I) for p in BOTTOM_TERMINATION_PATTERNS]


def load_sheet(
    path: str,
    designator_col: int = 0,
    description_col: int = 1,
    start_row: int = 9,
) -> list[dict]:
    """Load spreadsheet; return list of {designator, description} from start_row (0-based; 9 = row 10)."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(path)
    suffix = p.suffix.lower()
    if suffix in (".xlsx", ".xls"):
        df = pd.read_excel(path, header=None, engine="openpyxl" if suffix == ".xlsx" else None)
    elif suffix == ".csv":
        df = pd.read_csv(path, header=None)
    else:
        raise ValueError(f"Unsupported format: {suffix}")
    if max(designator_col, description_col) >= df.shape[1]:
        raise ValueError(f"Sheet has {df.shape[1]} columns; need at least {max(designator_col, description_col) + 1}")
    rows = []
    for i in range(start_row, len(df)):
        des = str(df.iloc[i, designator_col]).strip() if pd.notna(df.iloc[i, designator_col]) else ""
        desc = str(df.iloc[i, description_col]).strip() if pd.notna(df.iloc[i, description_col]) else ""
        if des or desc:
            rows.append({"designator": des, "description": desc})
    return rows


def is_bottom_termination_in_description(description: str) -> tuple[bool, str]:
    """
    True if description (case-insensitive) contains a bottom termination package type
    (e.g. QFN, DFN, BGA). Returns (matched, pattern_matched) e.g. (True, "qfn").
    """
    if not (description or "").strip():
        return False, ""
    d = description.lower()
    for pat in BOTTOM_TERM_REGEXES:
        if pat.search(d):
            name = pat.pattern.replace(r"\b", "").replace("\\s+", " ").strip()
            return True, name
    return False, ""


def main() -> int:
    default_sheet = os.environ.get("SHEET", "").strip() or None
    default_out = os.environ.get("BOTTOM_TERM_OUTPUT", "").strip() or "outputs/bottom_termination_parts.json"
    default_des_col = os.environ.get("DESCRIPTION_COL", "").strip()
    default_start = os.environ.get("START_ROW", "").strip()
    try:
        default_des_col = int(default_des_col) if default_des_col else 1
    except ValueError:
        default_des_col = 1
    try:
        default_start = int(default_start) if default_start else 9
    except ValueError:
        default_start = 9

    parser = argparse.ArgumentParser(
        description="Find components with bottom termination package types (e.g. QFN, DFN, BGA); only description column is searched"
    )
    parser.add_argument("file", nargs="?", default=default_sheet, help="Spreadsheet path (default: SHEET from .env)")
    parser.add_argument("-o", "--output", default=default_out, help="Output JSON (default: BOTTOM_TERM_OUTPUT from .env)")
    parser.add_argument("--designator-col", type=int, default=0, help="Designator column 0-based (default 0)")
    parser.add_argument("--description-col", type=int, default=default_des_col, metavar="COL", help="Description column 0-based (searched for package types; default: DESCRIPTION_COL from .env or 1)")
    parser.add_argument("--start-row", type=int, default=default_start, help="First data row 0-based, 9=row 10 (default: START_ROW from .env or 9)")
    args = parser.parse_args()

    path = (args.file or default_sheet or "").strip()
    if not path:
        parser.error("No spreadsheet. Set SHEET in .env or pass file path.")
    if not Path(path).exists():
        print(f"Error: file not found: {path}", file=sys.stderr)
        return 1

    try:
        rows = load_sheet(
            path,
            args.designator_col,
            args.description_col,
            args.start_row,
        )
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    matches = []
    for r in rows:
        ok, pattern = is_bottom_termination_in_description(r["description"])
        if ok:
            matches.append({**r, "matched_pattern": pattern})

    report = {
        "file": path,
        "designator_col": args.designator_col,
        "description_col": args.description_col,
        "start_row": args.start_row + 1,
        "count": len(matches),
        "parts": matches,
    }

    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")
    print(f"Wrote {args.output}")
    print(f"Found {len(matches)} components with bottom termination (by description column)")
    for m in matches:
        extra = f" [{m['matched_pattern']}]" if m.get("matched_pattern") else ""
        desc = m["description"][:70] + "..." if len(m["description"]) > 70 else m["description"]
        print(f"  {m['designator']}: {desc}{extra}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
capacitors_by_net_pair.py
@@ -1,15 +1,13 @@
 #!/usr/bin/env python3
 """
-Parse Protel PCB 2.8 ASCII (or Protel 99 SE PCB ASCII) and output capacitors
-by net pair: JSON with net pair as key, designator/value/package and total
-capacitance per net pair.
+Parse KiCad .kicad_pcb and output capacitors by net pair: JSON with net pair
+as key, designator/value/package and total capacitance per net pair.
 
-Input/output paths are read from .env (INPUT_FILE, OUTPUT_FILE) if set;
-CLI arguments override them.
+All paths: use .env (INPUT_FILE, OUTPUT_FILE) or CLI; CLI overrides .env.
 
 Usage:
-    python capacitors_by_net_pair.py [file.pcb] [-o output.json]
-    # or set in .env: INPUT_FILE=board.pcb, OUTPUT_FILE=out.json
+    python capacitors_by_net_pair.py [file.kicad_pcb] [-o output.json]
+    # or set in .env: INPUT_FILE=board.kicad_pcb, OUTPUT_FILE=out.json
 """
 
 import argparse
@@ -75,100 +73,87 @@ def format_capacitance(cap_f: float) -> str:
     return f"{cap_f}F"
 
 
-# --- Protel PCB 2.8 ASCII parsing --------------------------------------------
+# --- KiCad .kicad_pcb parsing -------------------------------------------------
 
-def parse_protel_ascii(path: str) -> tuple[list[dict], dict]:
+def _find_matching_paren(text: str, start: int) -> int:
+    """Return index of closing paren for the ( at start."""
+    depth = 0
+    for i in range(start, len(text)):
+        if text[i] == "(":
+            depth += 1
+        elif text[i] == ")":
+            depth -= 1
+            if depth == 0:
+                return i
+    return -1
+def parse_kicad_pcb(path: str) -> tuple[list[dict], dict]:
     """
-    Parse file and return (components, pin_to_net).
-    components: list of {
-        "designator": str,
-        "pattern": str,
-        "value": str,
-        "pins": list of (pin_id, net_name or None),
-    }
-    pin_to_net: (designator, pin_id) -> net_name (from NET section if present).
+    Parse KiCad .kicad_pcb (s-expression) and return (components, pin_to_net).
+    components: designator, pattern, value, pins (with net names from pad net).
+    pin_to_net is empty (unused for KiCad).
     """
     text = Path(path).read_text(encoding="utf-8", errors="replace")
-    lines = [line.rstrip() for line in text.splitlines()]
-
-    # NET section: map (Designator, PinNumber) -> NetName
-    # Formats: "NET" "NetName" then "C1-1" / C1-1; or NetName then Designator-Pin lines
-    pin_to_net: dict[tuple[str, str], str] = {}
-    i = 0
-    while i < len(lines):
-        line = lines[i]
-        rest = line.strip()
-        # "NET" "NetName" or NET NetName
-        if re.match(r"^\s*NET\s+", line, re.I):
-            parts = re.split(r"\s+", rest, maxsplit=2)
-            current_net = (parts[1].strip('"') if len(parts) > 1 else "") or None
-            i += 1
-            while i < len(lines):
-                conn = lines[i].strip().strip('"')
-                if re.match(r"^[A-Za-z0-9_]+-\d+$", conn) and current_net:
-                    comp, pin = conn.split("-", 1)
-                    pin_to_net[(comp.upper(), pin)] = current_net
-                    i += 1
-                elif conn.upper() in ("ENDNET", "END", ""):
-                    break
-                else:
-                    i += 1
-            continue
-        i += 1
-
-    # COMP ... ENDCOMP blocks
+    # Net list: (net N "Name")
+    net_by_num: dict[int, str] = {}
+    for m in re.finditer(r'\(\s*net\s+(\d+)\s+"([^"]*)"\s*\)', text, re.I):
+        net_by_num[int(m.group(1))] = m.group(2)
+    # Footprint blocks: find each (footprint "Name" and its matching )
     components: list[dict] = []
-    i = 0
-    while i < len(lines):
-        line = lines[i]
-        if line.strip().upper() != "COMP":
-            i += 1
-            continue
-        i += 1
-        designator = ""
-        pattern = ""
-        value = ""
-        pins: list[tuple[str, str | None]] = []  # (pin_id, net_name)
-
-        while i < len(lines):
-            ln = lines[i]
-            if ln.strip().upper() == "ENDCOMP":
-                i += 1
-            # Designator: first token on first line after COMP that isn't PATTERN/VALUE/PIN/PINNE
-            if not designator and ln.strip():
-                parts = ln.strip().split()
-                if parts:
-                    candidate = parts[0]
-                    if not re.match(r"^(PATTERN|VALUE|PIN|PINNE|ENDCOMP)$", candidate, re.I):
-                        designator = candidate
-            # PATTERN = value or PATTERN value
-            m = re.match(r"^\s*PATTERN\s+(.+)$", ln, re.I)
-            if m:
-                pattern = m.group(1).strip().strip('"')
-            m = re.match(r"^\s*VALUE\s+(.+)$", ln, re.I)
-            if m:
-                value = m.group(1).strip().strip('"')
-            # PIN: PIN <name> [net] ... or PINNE <name> <net> ... (numbers follow)
-            pin_match = re.match(r"^\s*(?:PINNE|PIN)\s+(\S+)\s+(\S+)", ln, re.I)
-            if pin_match:
-                pin_name, second = pin_match.groups()
-                net_name = second if not second.replace(".", "").isdigit() else None
-                if net_name and net_name.upper() in ("PINNE", "PIN"):
-                    net_name = None
-                if not net_name and (designator, pin_name) in pin_to_net:
-                    net_name = pin_to_net[(designator, pin_name)]
-                elif not net_name and (designator.upper(), pin_name) in pin_to_net:
-                    net_name = pin_to_net[(designator.upper(), pin_name)]
-                pins.append((pin_name, net_name))
-            else:
-                pin_simple = re.match(r"^\s*(?:PINNE|PIN)\s+(\S+)", ln, re.I)
-                if pin_simple:
-                    pn = pin_simple.group(1)
-                    net_name = pin_to_net.get((designator, pn)) or pin_to_net.get((designator.upper(), pn))
-                    pins.append((pn, net_name))
-            i += 1
+    fp_pos = 0
+    while True:
+        fp_start = text.find("(footprint ", fp_pos)
+        if fp_start < 0:
+            break
+        quote = text.find('"', fp_start + 10)
+        if quote < 0:
+            break
+        quote_end = text.find('"', quote + 1)
+        if quote_end < 0:
+            break
+        lib_name = text[quote + 1 : quote_end]
+        fp_end = _find_matching_paren(text, fp_start)
+        if fp_end < 0:
+            break
+        rest = text[fp_start:fp_end + 1]
+        pattern = lib_name.split(":")[-1] if ":" in lib_name else lib_name
+        ref_m = re.search(r'\(\s*fp_text\s+reference\s+"([^"]+)"', rest)
+        value_m = re.search(r'\(\s*fp_text\s+value\s+"([^"]+)"', rest)
+        designator = ref_m.group(1) if ref_m else ""
+        value = value_m.group(1) if value_m else "?"
+        pins: list[tuple[str, str | None]] = []
+        # Find each (pad "N" ... and the (net num "Name") inside that pad's span
+        pos = 0
+        while True:
+            pad_start = rest.find("(pad ", pos)
+            if pad_start < 0:
+                break
+            quote = rest.find('"', pad_start + 5)
+            if quote < 0:
+                break
+            quote_end = rest.find('"', quote + 1)
+            if quote_end < 0:
+                break
+            pad_num = rest[quote + 1 : quote_end]
+            depth = 0
+            end = pad_start + 4
+            for i, c in enumerate(rest[pad_start:], start=pad_start):
+                if c == "(":
+                    depth += 1
+                elif c == ")":
+                    depth -= 1
+                    if depth == 0:
+                        end = i
+                        break
+            block = rest[pad_start:end]
+            net_m = re.search(r'\(\s*net\s+(\d+)\s+"([^"]*)"\s*\)', block)
+            if net_m:
+                net_name = net_m.group(2) or net_by_num.get(int(net_m.group(1)), "")
+            else:
+                net_name = net_by_num.get(0, "") if 0 in net_by_num else None
+            pins.append((pad_num, net_name or None))
+            pos = end + 1
         if designator:
             components.append({
                 "designator": designator,
@@ -176,7 +161,8 @@ def parse_protel_ascii(path: str) -> tuple[list[dict], dict]:
                 "value": value or "?",
                 "pins": pins,
             })
-    return components, pin_to_net
+        fp_pos = fp_end + 1
+    return components, {}
 
 
 def build_net_key(net1: str, net2: str) -> str:
@@ -188,19 +174,19 @@ def build_net_key(net1: str, net2: str) -> str:
 
 def main() -> int:
     default_input = os.environ.get("INPUT_FILE", "").strip() or None
-    default_output = os.environ.get("OUTPUT_FILE", "").strip() or "output/capacitors_by_net_pair.json"
+    default_output = os.environ.get("OUTPUT_FILE", "").strip() or "outputs/capacitors_by_net_pair.json"
 
-    parser = argparse.ArgumentParser(description="List capacitors by net pair from Protel PCB 2.8 ASCII")
+    parser = argparse.ArgumentParser(description="List capacitors by net pair from KiCad .kicad_pcb")
     parser.add_argument(
         "file",
         nargs="?",
         default=default_input,
-        help="Path to .pcb / .PcbDoc (ASCII) file (default: INPUT_FILE from .env)",
+        help="Path to .kicad_pcb file (default: INPUT_FILE from .env)",
     )
     parser.add_argument(
         "-o", "--output",
         default=default_output,
-        help="Output JSON path (default: OUTPUT_FILE from .env or output/capacitors_by_net_pair.json)",
+        help="Output JSON path (default: OUTPUT_FILE from .env or outputs/capacitors_by_net_pair.json)",
    )
     parser.add_argument("--all-two-pad", action="store_true", help="Include all 2-pad parts, not only C*")
     args = parser.parse_args()
@@ -210,7 +196,7 @@ def main() -> int:
         parser.error("No input file. Set INPUT_FILE in .env or pass the file path as an argument.")
 
     try:
-        components, pin_to_net = parse_protel_ascii(input_path)
+        components, pin_to_net = parse_kicad_pcb(input_path)
     except Exception as e:
         print(f"Parse error: {e}", file=sys.stderr)
         return 1
diff_spreadsheets.py (122 lines, new file)
@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Diff two spreadsheets by designator column. Data starts at row 10 by default.

Usage:
    python3 diff_spreadsheets.py file1.xlsx file2.xlsx [-o output.json]
    python3 diff_spreadsheets.py   # uses SHEET1, SHEET2, DIFF_OUTPUT from .env

All paths and options: use .env or CLI; CLI overrides .env.
"""

import argparse
import json
import os
import sys
from pathlib import Path

try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

try:
    import pandas as pd
except ImportError:
    print("Install pandas and openpyxl: pip install pandas openpyxl", file=sys.stderr)
    sys.exit(1)


def read_designators(path: str, designator_col: int = 0, start_row: int = 9) -> set[str]:
    """Load spreadsheet and return set of designator values from the given column, starting at start_row (0-based; 9 = row 10)."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(path)
    suffix = p.suffix.lower()
    if suffix in (".xlsx", ".xls"):
        df = pd.read_excel(path, header=None, engine="openpyxl" if suffix == ".xlsx" else None)
    elif suffix == ".csv":
        df = pd.read_csv(path, header=None)
    else:
        raise ValueError(f"Unsupported format: {suffix}")
    if designator_col >= df.shape[1]:
        raise ValueError(f"Column {designator_col} not in sheet (has {df.shape[1]} columns)")
    # from start_row to end, take the designator column, drop NaN, strip strings
    col = df.iloc[start_row:, designator_col]
    values = set()
    for v in col:
        if pd.isna(v):
            continue
        s = str(v).strip()
        if s:
            values.add(s)
    return values


def main() -> int:
    default1 = os.environ.get("SHEET1", "").strip() or None
    default2 = os.environ.get("SHEET2", "").strip() or None
    default_out = os.environ.get("DIFF_OUTPUT", "").strip() or "outputs/spreadsheet_diff.json"
    default_col = os.environ.get("DESIGNATOR_COL", "").strip()
    default_start = os.environ.get("START_ROW", "").strip()
    try:
        default_col = int(default_col) if default_col else 0
    except ValueError:
        default_col = 0
    try:
        default_start = int(default_start) if default_start else 9
    except ValueError:
        default_start = 9

    parser = argparse.ArgumentParser(description="Diff two spreadsheets by designator column")
    parser.add_argument("file1", nargs="?", default=default1, help="First spreadsheet (default: SHEET1 from .env)")
    parser.add_argument("file2", nargs="?", default=default2, help="Second spreadsheet (default: SHEET2 from .env)")
    parser.add_argument("-o", "--output", default=default_out, help="Output JSON (default: DIFF_OUTPUT from .env)")
    parser.add_argument("--designator-col", type=int, default=default_col, help="Designator column 0-based (default: DESIGNATOR_COL from .env or 0)")
    parser.add_argument("--start-row", type=int, default=default_start, help="First data row 0-based, 9=row 10 (default: START_ROW from .env or 9)")
    args = parser.parse_args()

    path1 = (args.file1 or default1 or "").strip()
    path2 = (args.file2 or default2 or "").strip()
    if not path1 or not path2:
        parser.error("Need two files. Set SHEET1 and SHEET2 in .env or pass two paths.")

    try:
        d1 = read_designators(path1, args.designator_col, args.start_row)
        d2 = read_designators(path2, args.designator_col, args.start_row)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    only1 = sorted(d1 - d2)
    only2 = sorted(d2 - d1)
    both = sorted(d1 & d2)

    report = {
        "file1": path1,
        "file2": path2,
        "designator_col": args.designator_col,
        "start_row": args.start_row + 1,
        "only_in_file1": only1,
        "only_in_file2": only2,
        "in_both": both,
        "count_only_in_file1": len(only1),
        "count_only_in_file2": len(only2),
        "count_in_both": len(both),
    }

    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")
    print(f"Wrote {args.output}")
    print(f"Only in file1: {len(only1)} | Only in file2: {len(only2)} | In both: {len(both)}")
    if only1:
        print("  Only in file1:", ", ".join(only1[:20]) + (" ..." if len(only1) > 20 else ""))
    if only2:
        print("  Only in file2:", ", ".join(only2[:20]) + (" ..." if len(only2) > 20 else ""))
    return 0


if __name__ == "__main__":
    sys.exit(main())
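`diff_spreadsheets.py` leans on pandas for file loading, but the core comparison is plain set arithmetic; a stdlib-only sketch of the same designator diff (the sheet contents here are made up):

```python
import csv
import io

def designators(csv_text: str, col: int = 0, start_row: int = 9) -> set[str]:
    """Collect non-empty designators from `col`, skipping the first `start_row` rows."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return {r[col].strip() for r in rows[start_row:] if len(r) > col and r[col].strip()}

sheet1 = "\n".join(["header"] * 9 + ["C1", "C2", "R1"])
sheet2 = "\n".join(["header"] * 9 + ["C2", "R1", "R2"])
d1, d2 = designators(sheet1), designators(sheet2)
print(sorted(d1 - d2), sorted(d2 - d1), sorted(d1 & d2))  # ['C1'] ['R2'] ['C2', 'R1']
```

The script's `only_in_file1` / `only_in_file2` / `in_both` keys are exactly these three set expressions, sorted.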
@@ -1,12 +1,12 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Load two Protel PCB 2.8 ASCII files and report which components have moved
|
||||
between them. Component position is taken as the centroid of pin coordinates.
|
||||
Load two KiCad .kicad_pcb files and report which components have moved
|
||||
between them. Component position is the centroid of pad (at x y) coordinates.
|
||||
|
||||
Input/output paths can be set in .env (FILE1, FILE2, COMPARE_OUTPUT); CLI overrides.
|
||||
All paths: use .env (FILE1, FILE2, COMPARE_OUTPUT) or CLI; CLI overrides .env.
|
||||
|
||||
Usage:
|
||||
python3 compare_protel_locations.py file1.pcb file2.pcb [-o report.json]
|
||||
python3 compare_protel_locations.py file1.kicad_pcb file2.kicad_pcb [-o report.json]
|
||||
python3 compare_protel_locations.py # uses FILE1, FILE2 from .env
|
||||
"""
|
||||
|
||||
@@ -25,58 +25,66 @@ except ImportError:
|
||||
pass
|
||||
|
||||
|
||||
def _find_matching_paren(text: str, start: int) -> int:
|
||||
"""Return index of closing paren for the ( at start."""
|
||||
depth = 0
|
||||
for i in range(start, len(text)):
|
||||
if text[i] == "(":
|
||||
depth += 1
|
||||
elif text[i] == ")":
|
||||
depth -= 1
|
||||
if depth == 0:
|
||||
return i
|
||||
return -1
|
||||


def parse_components_with_positions(path: str) -> dict[str, dict]:
    """
    Parse a Protel PCB 2.8 ASCII file and return a dict:
    Parse a KiCad .kicad_pcb file and return a dict:
        designator -> { "x": float, "y": float, "pins": [(x,y), ...], "pattern": str, "value": str }
    Position is the centroid of all pin coordinates (from the numeric line after each PIN line).
    Position is the centroid of all pad (at x y) coordinates.
    """
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    lines = [line.rstrip() for line in text.splitlines()]

    components: dict[str, dict] = {}
    i = 0
    while i < len(lines):
        if lines[i].strip().upper() != "COMP":
            i += 1
            continue
        i += 1
        designator = ""
        pattern = ""
        value = ""
        pin_coords: list[tuple[float, float]] = []

        while i < len(lines):
            ln = lines[i]
            if ln.strip().upper() == "ENDCOMP":
                i += 1
    fp_pos = 0
    while True:
        fp_start = text.find("(footprint ", fp_pos)
        if fp_start < 0:
            break
            if not designator and ln.strip():
                parts = ln.strip().split()
                if parts and not re.match(
                    r"^(PATTERN|VALUE|PIN|PINNE|ENDCOMP)$", parts[0], re.I
                ):
                    designator = parts[0]
            m = re.match(r"^\s*PATTERN\s+(.+)$", ln, re.I)
            if m:
                pattern = m.group(1).strip().strip('"')
            m = re.match(r"^\s*VALUE\s+(.+)$", ln, re.I)
            if m:
                value = m.group(1).strip().strip('"')
            if re.match(r"^\s*(?:PINNE|PIN)\s+", ln, re.I):
                # Next line often has coordinates: first two numbers are X Y
                if i + 1 < len(lines):
                    next_ln = lines[i + 1].strip().split()
                    nums = [t for t in next_ln if re.match(r"^-?\d+$", t)]
                    if len(nums) >= 2:
        quote = text.find('"', fp_start + 10)
        if quote < 0:
            break
        quote_end = text.find('"', quote + 1)
        if quote_end < 0:
            break
        lib_name = text[quote + 1 : quote_end]
        fp_end = _find_matching_paren(text, fp_start)
        if fp_end < 0:
            break
        rest = text[fp_start:fp_end + 1]
        pattern = lib_name.split(":")[-1] if ":" in lib_name else lib_name
        ref_m = re.search(r'\(\s*fp_text\s+reference\s+"([^"]+)"', rest)
        value_m = re.search(r'\(\s*fp_text\s+value\s+"([^"]+)"', rest)
        designator = ref_m.group(1) if ref_m else ""
        value = value_m.group(1) if value_m else "?"
        pin_coords: list[tuple[float, float]] = []
        pos = 0
        while True:
            pad_start = rest.find("(pad ", pos)
            if pad_start < 0:
                break
            pad_end = _find_matching_paren(rest, pad_start)
            if pad_end < 0:
                break
            block = rest[pad_start:pad_end + 1]
            at_m = re.search(r'\(\s*at\s+([-\d.]+)\s+([-\d.]+)', block)
            if at_m:
                try:
                    x, y = int(nums[0]), int(nums[1])
                    pin_coords.append((float(x), float(y)))
                    x, y = float(at_m.group(1)), float(at_m.group(2))
                    pin_coords.append((x, y))
                except ValueError:
                    pass
                    i += 1
            i += 1

            pos = pad_end + 1
        if designator and pin_coords:
            cx = sum(p[0] for p in pin_coords) / len(pin_coords)
            cy = sum(p[1] for p in pin_coords) / len(pin_coords)
@@ -87,6 +95,7 @@ def parse_components_with_positions(path: str) -> dict[str, dict]:
                "pattern": pattern or "",
                "value": value or "?",
            }
        fp_pos = fp_end + 1
    return components

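The centroid logic in the new parser can be exercised on a tiny synthetic footprint. This is a standalone sketch: the snippet below is invented test data in the shape the parser greps for, and a single regex stands in for the find/paren-scan loop, not the full KiCad grammar:

```python
import re

# Hypothetical two-pad footprint, shaped like the (pad ... (at x y)) blocks
# the parser extracts.
snippet = '''(footprint "Lib:C_0805"
  (fp_text reference "C1")
  (fp_text value "100nF")
  (pad "1" smd rect (at 10.0 20.0))
  (pad "2" smd rect (at 12.0 20.0))
)'''

# Pull the (at x y) out of each pad block.
pads = [
    (float(m.group(1)), float(m.group(2)))
    for m in re.finditer(r'\(pad\s+[^()]*\(at\s+([-\d.]+)\s+([-\d.]+)', snippet)
]

# Component position = centroid of all pad coordinates.
cx = sum(x for x, _ in pads) / len(pads)
cy = sum(y for _, y in pads) / len(pads)
print(pads, (cx, cy))  # two pads -> centroid (11.0, 20.0)
```

The real function records both the per-pad coordinates (`pins`) and the centroid (`x`, `y`) per designator.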
@@ -99,35 +108,40 @@ def main() -> int:
    default_file2 = os.environ.get("FILE2", "").strip() or None
    default_output = (
        os.environ.get("COMPARE_OUTPUT", "").strip()
        or "output/compare_locations.json"
        or "outputs/compare_locations.json"
    )
    default_threshold = os.environ.get("THRESHOLD", "").strip()
    try:
        default_threshold = float(default_threshold) if default_threshold else 1.0
    except ValueError:
        default_threshold = 1.0

    parser = argparse.ArgumentParser(
        description="Compare two Protel PCB ASCII files and list components that moved"
        description="Compare two KiCad .kicad_pcb files and list components that moved"
    )
    parser.add_argument(
        "file1",
        nargs="?",
        default=default_file1,
        help="First PCB file (default: FILE1 from .env)",
        help="First .kicad_pcb file (default: FILE1 from .env)",
    )
    parser.add_argument(
        "file2",
        nargs="?",
        default=default_file2,
        help="Second PCB file (default: FILE2 from .env)",
        help="Second .kicad_pcb file (default: FILE2 from .env)",
    )
    parser.add_argument(
        "-o",
        "--output",
        default=default_output,
        help="Output JSON path (default: COMPARE_OUTPUT from .env or output/compare_locations.json)",
        help="Output JSON path (default: COMPARE_OUTPUT from .env or outputs/compare_locations.json)",
    )
    parser.add_argument(
        "--threshold",
        type=float,
        default=1.0,
        help="Minimum position change to count as moved (default: 1.0)",
        default=default_threshold,
        help="Minimum position change to count as moved (default: THRESHOLD from .env or 1.0)",
    )
    args = parser.parse_args()

@@ -135,7 +149,7 @@ def main() -> int:
    path2 = (args.file2 or default_file2 or "").strip()
    if not path1 or not path2:
        parser.error(
            "Need two PCB files. Set FILE1 and FILE2 in .env or pass two paths."
            "Need two .kicad_pcb files. Set FILE1 and FILE2 in .env or pass two paths."
        )

    if not Path(path1).exists():
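The ".env default, CLI overrides" convention used throughout these scripts reduces to a few lines: read the environment value defensively, feed it to argparse as the default, and let an explicit flag win. A standalone sketch (THRESHOLD is the variable name from .env.example):

```python
import argparse
import os

# .env values land in os.environ (via python-dotenv); parse THRESHOLD defensively.
raw = os.environ.get("THRESHOLD", "").strip()
try:
    default_threshold = float(raw) if raw else 1.0
except ValueError:
    default_threshold = 1.0  # malformed .env value falls back to 1.0

parser = argparse.ArgumentParser()
# The argparse default carries the .env value; an explicit --threshold wins.
parser.add_argument("--threshold", type=float, default=default_threshold)

print(parser.parse_args([]).threshold)                    # .env value or 1.0
print(parser.parse_args(["--threshold", "0.5"]).threshold)  # CLI overrides
```

Parsing THRESHOLD once into `default_threshold` (rather than inline in `add_argument`) keeps a malformed .env value from crashing argument parsing.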
@@ -1 +1,3 @@
python-dotenv>=1.0.0
pandas>=2.0.0
openpyxl>=3.1.0
||||
13
tests/sheet1.csv
Normal file
@@ -0,0 +1,13 @@
x
x
x
x
x
x
x
x
x
C1,10uF,0805
C2,22uF,0805
C3,1uF,0603
R1,10k,0805
13
tests/sheet2.csv
Normal file
@@ -0,0 +1,13 @@
x
x
x
x
x
x
x
x
x
C1,10uF,0805
C3,1uF,0603
R1,10k,0805
C4,100nF,0603
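These two fixtures exercise the spreadsheet-diff script: data starts at row 10 (START_ROW=9) and the designator sits in column 0, per .env.example. A stdlib-only sketch of the comparison (the real script uses pandas/openpyxl; the function name here is illustrative):

```python
import csv
import io

# Inline copies of the two fixtures: 9 header rows, then data.
SHEET1 = "x\n" * 9 + "C1,10uF,0805\nC2,22uF,0805\nC3,1uF,0603\nR1,10k,0805\n"
SHEET2 = "x\n" * 9 + "C1,10uF,0805\nC3,1uF,0603\nR1,10k,0805\nC4,100nF,0603\n"

def designators(text: str, start_row: int = 9, col: int = 0) -> set[str]:
    # Skip the rows above START_ROW, then collect the designator column.
    rows = list(csv.reader(io.StringIO(text)))[start_row:]
    return {r[col].strip() for r in rows if r and r[col].strip()}

d1, d2 = designators(SHEET1), designators(SHEET2)
report = {"added": sorted(d2 - d1), "removed": sorted(d1 - d2)}
print(report)  # {'added': ['C4'], 'removed': ['C2']}
```

With these fixtures the expected diff is exactly one added part (C4) and one removed part (C2); C1, C3, and R1 appear in both sheets.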
16
tests/sheet_with_descriptions.csv
Normal file
@@ -0,0 +1,16 @@
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
C1,10uF,0805,Capacitor MLCC bottom termination 0201 10uF 10V
C2,22uF,0805,Capacitor MLCC 22uF 16V
C3,1uF,0603,Capacitor MLCC bottom termination 0402 1uF 6.3V
R1,10k,0805,Resistor thick film 10k 1%
R2,100R,0201,Resistor bottom termination 0201 100ohm
U1,IC Regulator,QFN-16,3.3V LDO QFN-16 300mA
U2,MCU,DFN-8,ARM Cortex-M0+ DFN-8
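This fixture feeds the bottom-termination search, which matches on description text alone (no package column is consulted). A stdlib-only sketch: note the description sits in the fourth column of this particular fixture, while the real script reads the column index from DESCRIPTION_COL, so the `desc_col=3` default here is fixture-specific:

```python
import csv
import io

# Inline copy of the fixture: 9 header rows, then data from row 10.
CSV_TEXT = "x,x,x,x\n" * 9 + """\
C1,10uF,0805,Capacitor MLCC bottom termination 0201 10uF 10V
C2,22uF,0805,Capacitor MLCC 22uF 16V
C3,1uF,0603,Capacitor MLCC bottom termination 0402 1uF 6.3V
R1,10k,0805,Resistor thick film 10k 1%
R2,100R,0201,Resistor bottom termination 0201 100ohm
U1,IC Regulator,QFN-16,3.3V LDO QFN-16 300mA
U2,MCU,DFN-8,ARM Cortex-M0+ DFN-8
"""

def bottom_termination_parts(text: str, start_row: int = 9, desc_col: int = 3) -> list[str]:
    # Case-insensitive substring match on the description column only.
    rows = list(csv.reader(io.StringIO(text)))[start_row:]
    return [r[0] for r in rows
            if len(r) > desc_col and "bottom termination" in r[desc_col].lower()]

print(bottom_termination_parts(CSV_TEXT))  # ['C1', 'C3', 'R2']
```

The expected hits are C1, C3, and R2: the only rows whose description text contains "bottom termination". QFN/DFN parts (U1, U2) are deliberately not matched, since only the description is searched.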