Compare commits


10 Commits

Author SHA1 Message Date
758991d75d Use outputs/ for all JSON output
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:36:14 +13:00
22b4117990 Update README for new scripts and options
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:43 +13:00
6a5c2d09aa Add sheet with descriptions for bottom term test
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:43 +13:00
eba6e77ef2 Add sheet test files for diff
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:42 +13:00
951ad08db4 Add find bottom termination parts by description
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:41 +13:00
636c323d4b Add spreadsheet diff by designator
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:41 +13:00
6c1ae59f2d Use .env or CLI for paths and options
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:40 +13:00
ed5c186f4d Update .env.example for all scripts
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:39 +13:00
414c78a9c4 Add pandas and openpyxl to requirements
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:33:39 +13:00
fd3d242407 Update .gitignore for output and .env
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 12:10:15 +13:00
11 changed files with 425 additions and 19 deletions

.env.example

@@ -1,9 +1,23 @@
# All scripts: set vars in .env or pass via CLI; CLI overrides .env.

# Capacitors by net pair
INPUT_FILE=board.pcb
OUTPUT_FILE=outputs/capacitors_by_net_pair.json

# Compare Protel locations
FILE1=board_v1.pcb
FILE2=board_v2.pcb
COMPARE_OUTPUT=outputs/compare_locations.json
THRESHOLD=1.0

# Spreadsheet diff (designator column, data from row 10)
SHEET1=sheet1.xlsx
SHEET2=sheet2.xlsx
DIFF_OUTPUT=outputs/spreadsheet_diff.json
DESIGNATOR_COL=0
START_ROW=9

# Find bottom termination parts (search description column only; no package column)
SHEET=sheet.xlsx
BOTTOM_TERM_OUTPUT=outputs/bottom_termination_parts.json
DESCRIPTION_COL=1

.gitignore

@@ -1,2 +1,3 @@
*.json
.env
outputs/

README.md

@@ -1,5 +1,9 @@
# Altium Scripts
**Convention:** All Python scripts use **.env** for input/output paths (and optional settings); you can override any value via **CLI**. All scripts write JSON output to the **`outputs/`** folder by default. Copy `.env.example` to `.env` and edit.
---
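The .env-or-CLI precedence that all the scripts follow can be sketched as below (a minimal sketch of the convention, not any one script's full argument handling; `INPUT_FILE` is the variable used by the capacitors script):

```python
import argparse
import os

# Optional .env support, as in the repo's scripts: without python-dotenv,
# only real environment variables are read.
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

# .env (or the environment) supplies the default; a CLI argument overrides it.
default_input = os.environ.get("INPUT_FILE", "").strip() or None

parser = argparse.ArgumentParser()
parser.add_argument("file", nargs="?", default=default_input)

args = parser.parse_args(["board.pcb"])  # simulating: python3 script.py board.pcb
print(args.file)  # the CLI value wins over INPUT_FILE
```

With no positional argument (`parser.parse_args([])`), `args.file` falls back to whatever `.env` provided.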
## Capacitors by net pair

**Script:** `CapacitorsByNetPair.pas`
@@ -67,7 +71,7 @@ Finds all **two-pad components** on the PCB that share the same two nets (e.g. d
python3 capacitors_by_net_pair.py board.PcbDoc -o out.json
```

**Input/output from .env:** Copy `.env.example` to `.env` and set `INPUT_FILE` and `OUTPUT_FILE`. The script reads these when the optional `python-dotenv` package is installed; CLI arguments override them. Without `.env`, you can still pass the input file and `-o` on the command line. By default the JSON is written to **`outputs/capacitors_by_net_pair.json`** (the `outputs/` directory is created if needed).

See **`capacitors_by_net_pair.py`** for the script. It parses COMP/PATTERN/VALUE and NET/PIN data from the ASCII file and produces the same JSON shape as the DelphiScript.
@@ -83,7 +87,7 @@ python3 capacitors_by_net_pair.py tests/sample_protel_ascii.pcb -o tests/out.jso
**Script:** `compare_protel_locations.py`

Loads two Protel PCB 2.8 ASCII files and reports **which components have moved** between them. Component position is the centroid of pin coordinates. Output is written to `outputs/compare_locations.json` by default.

- **Moved:** designators with different (x, y) in file2, with old position, new position, and distance.
- **Only in file1 / only in file2:** components that appear in just one file.
@@ -92,7 +96,7 @@ Loads two Protel PCB 2.8 ASCII files and reports **which components have moved**
```bash
python3 compare_protel_locations.py board_v1.pcb board_v2.pcb
python3 compare_protel_locations.py board_v1.pcb board_v2.pcb -o outputs/compare_locations.json
```

Use **.env** (optional): set `FILE1`, `FILE2`, and `COMPARE_OUTPUT`; CLI arguments override them. Use `--threshold N` to set the minimum position change to count as moved (default: `THRESHOLD` from .env, or 1.0).
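The moved test itself reduces to a Euclidean distance between the old and new centroids compared against the threshold; a minimal sketch (the function name and coordinates are illustrative, not the script's API):

```python
import math

def moved(old_xy, new_xy, threshold=1.0):
    """A component counts as moved when its centroid shifted by more than threshold."""
    dx = new_xy[0] - old_xy[0]
    dy = new_xy[1] - old_xy[1]
    return math.hypot(dx, dy) > threshold

print(moved((100.0, 200.0), (100.4, 200.3)))  # 0.5 units of travel: False at the default threshold
print(moved((100.0, 200.0), (103.0, 204.0)))  # 5.0 units of travel: True
```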
@@ -103,6 +107,57 @@ Use **.env** (optional): set `FILE1`, `FILE2`, and `COMPARE_OUTPUT`; CLI argumen
python3 compare_protel_locations.py tests/sample_protel_ascii.pcb tests/sample_protel_ascii_rev2.pcb
```
---
## Spreadsheet diff by designator
**Script:** `diff_spreadsheets.py`
Compares two spreadsheets (`.xlsx` or `.csv`) on a designator column. Data is read **from row 10** by default (first 9 rows skipped). Outputs which designators are only in file1, only in file2, or in both.
**Usage:**
```bash
pip install pandas openpyxl
python3 diff_spreadsheets.py sheet1.xlsx sheet2.xlsx -o outputs/spreadsheet_diff.json
```
Options: `--designator-col 0` (0-based column index), `--start-row 9` (0-based; 9 = row 10). Env: `SHEET1`, `SHEET2`, `DIFF_OUTPUT`, `DESIGNATOR_COL`, `START_ROW`.
**Test:** `tests/sheet1.csv` and `tests/sheet2.csv` (designators from row 10):
```bash
python3 diff_spreadsheets.py tests/sheet1.csv tests/sheet2.csv --start-row 9
```
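Under the hood the comparison is plain set arithmetic on the two designator columns; a sketch using literal sets in place of the spreadsheet loader (the values are what `read_designators` yields for the test CSVs above):

```python
# Designator sets as loaded from tests/sheet1.csv and tests/sheet2.csv
d1 = {"C1", "C2", "C3", "R1"}
d2 = {"C1", "C3", "R1", "C4"}

report = {
    "only_in_file1": sorted(d1 - d2),  # removed designators
    "only_in_file2": sorted(d2 - d1),  # added designators
    "in_both": sorted(d1 & d2),        # designators present in both sheets
}
print(report["only_in_file1"])  # ['C2']
print(report["only_in_file2"])  # ['C4']
print(report["in_both"])        # ['C1', 'C3', 'R1']
```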
---
## Find bottom termination parts (QFN, DFN, BGA) by description
**Script:** `find_bottom_termination_parts.py`
Reads the same spreadsheet format (designator column, data from row 10) plus a **description** column; there is no separate package column, so only the description text is searched. Finds components whose description indicates **bottom termination**, including:

- **Package types:** QFN, DFN, BGA, LGA, SON, MLF, MLP, WDFN, WQFN, VQFN, etc.
- **Generic:** the phrase “bottom termination” in the description (e.g. on 0201 or 0402 parts)

Outputs each matching designator, its description, and the matched pattern to `outputs/bottom_termination_parts.json`.
**Usage:**
```bash
python3 find_bottom_termination_parts.py sheet.xlsx --description-col 1
python3 find_bottom_termination_parts.py sheet.xlsx --description-col 3
```
Env: `SHEET`, `BOTTOM_TERM_OUTPUT`, `DESCRIPTION_COL` (default 1), `START_ROW` (default 9). No package column; only description is searched.
**Test:** `tests/sheet_with_descriptions.csv` (description col 3):
```bash
python3 find_bottom_termination_parts.py tests/sheet_with_descriptions.csv --description-col 3 --start-row 9
```
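The matching is a case-insensitive regex search over the description text; a sketch with a representative subset of the script's patterns (the helper name here is illustrative):

```python
import re

# Subset of the bottom-termination patterns used by find_bottom_termination_parts.py
PATTERNS = [re.compile(p, re.I) for p in (
    r"\bqfn\b",                   # Quad Flat No-leads (word boundary also matches "QFN-16")
    r"\bdfn\b",                   # Dual Flat No-leads
    r"\bbga\b",                   # Ball Grid Array
    r"\bbottom\s+termination\b",  # generic phrase in the description
)]

def has_bottom_termination(description: str) -> bool:
    return any(p.search(description) for p in PATTERNS)

print(has_bottom_termination("3.3V LDO QFN-16 300mA"))                            # True
print(has_bottom_termination("Capacitor MLCC bottom termination 0402 1uF 6.3V"))  # True
print(has_bottom_termination("Resistor thick film 10k 1%"))                       # False
```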
### Notes

- Only components with **exactly two pads** (each on a net) and **designator starting with `C`** are included (treated as capacitors). To include all two-pad parts, edit the script and remove the `And (UpperCase(Copy(Component.Name.Text, 1, 1)) = 'C')` condition.

capacitors_by_net_pair.py

@@ -4,8 +4,7 @@ Parse Protel PCB 2.8 ASCII (or Protel 99 SE PCB ASCII) and output capacitors
by net pair: JSON with net pair as key, designator/value/package and total
capacitance per net pair.

All paths: use .env (INPUT_FILE, OUTPUT_FILE) or CLI; CLI overrides .env.

Usage:
    python capacitors_by_net_pair.py [file.pcb] [-o output.json]
@@ -188,7 +187,7 @@ def build_net_key(net1: str, net2: str) -> str:
def main() -> int:
    default_input = os.environ.get("INPUT_FILE", "").strip() or None
    default_output = os.environ.get("OUTPUT_FILE", "").strip() or "outputs/capacitors_by_net_pair.json"

    parser = argparse.ArgumentParser(description="List capacitors by net pair from Protel PCB 2.8 ASCII")
    parser.add_argument(
@@ -200,7 +199,7 @@ def main() -> int:
    parser.add_argument(
        "-o", "--output",
        default=default_output,
        help="Output JSON path (default: OUTPUT_FILE from .env or outputs/capacitors_by_net_pair.json)",
    )
    parser.add_argument("--all-two-pad", action="store_true", help="Include all 2-pad parts, not only C*")
    args = parser.parse_args()

compare_protel_locations.py

@@ -3,7 +3,7 @@
Load two Protel PCB 2.8 ASCII files and report which components have moved
between them. Component position is taken as the centroid of pin coordinates.

All paths: use .env (FILE1, FILE2, COMPARE_OUTPUT) or CLI; CLI overrides .env.

Usage:
    python3 compare_protel_locations.py file1.pcb file2.pcb [-o report.json]
@@ -99,8 +99,13 @@ def main() -> int:
    default_file2 = os.environ.get("FILE2", "").strip() or None
    default_output = (
        os.environ.get("COMPARE_OUTPUT", "").strip()
        or "outputs/compare_locations.json"
    )
    default_threshold = os.environ.get("THRESHOLD", "").strip()
    try:
        default_threshold = float(default_threshold) if default_threshold else 1.0
    except ValueError:
        default_threshold = 1.0

    parser = argparse.ArgumentParser(
        description="Compare two Protel PCB ASCII files and list components that moved"
@@ -121,13 +126,13 @@ def main() -> int:
        "-o",
        "--output",
        default=default_output,
        help="Output JSON path (default: COMPARE_OUTPUT from .env or outputs/compare_locations.json)",
    )
    parser.add_argument(
        "--threshold",
        type=float,
        default=default_threshold,
        help="Minimum position change to count as moved (default: THRESHOLD from .env or 1.0)",
    )
    args = parser.parse_args()

diff_spreadsheets.py

@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Diff two spreadsheets by designator column. Data starts at row 10 by default.

Usage:
    python3 diff_spreadsheets.py file1.xlsx file2.xlsx [-o output.json]
    python3 diff_spreadsheets.py  # uses SHEET1, SHEET2, DIFF_OUTPUT from .env

All paths and options: use .env or CLI; CLI overrides .env.
"""
import argparse
import json
import os
import sys
from pathlib import Path

try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

try:
    import pandas as pd
except ImportError:
    print("Install pandas and openpyxl: pip install pandas openpyxl", file=sys.stderr)
    sys.exit(1)


def read_designators(path: str, designator_col: int = 0, start_row: int = 9) -> set[str]:
    """Load spreadsheet and return set of designator values from the given column, starting at start_row (0-based; 9 = row 10)."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(path)
    suffix = p.suffix.lower()
    if suffix in (".xlsx", ".xls"):
        df = pd.read_excel(path, header=None, engine="openpyxl" if suffix == ".xlsx" else None)
    elif suffix == ".csv":
        df = pd.read_csv(path, header=None)
    else:
        raise ValueError(f"Unsupported format: {suffix}")
    if designator_col >= df.shape[1]:
        raise ValueError(f"Column {designator_col} not in sheet (has {df.shape[1]} columns)")
    # From start_row to end, take the designator column, drop NaN, strip strings
    col = df.iloc[start_row:, designator_col]
    values = set()
    for v in col:
        if pd.isna(v):
            continue
        s = str(v).strip()
        if s:
            values.add(s)
    return values


def main() -> int:
    default1 = os.environ.get("SHEET1", "").strip() or None
    default2 = os.environ.get("SHEET2", "").strip() or None
    default_out = os.environ.get("DIFF_OUTPUT", "").strip() or "outputs/spreadsheet_diff.json"
    default_col = os.environ.get("DESIGNATOR_COL", "").strip()
    default_start = os.environ.get("START_ROW", "").strip()
    try:
        default_col = int(default_col) if default_col else 0
    except ValueError:
        default_col = 0
    try:
        default_start = int(default_start) if default_start else 9
    except ValueError:
        default_start = 9

    parser = argparse.ArgumentParser(description="Diff two spreadsheets by designator column")
    parser.add_argument("file1", nargs="?", default=default1, help="First spreadsheet (default: SHEET1 from .env)")
    parser.add_argument("file2", nargs="?", default=default2, help="Second spreadsheet (default: SHEET2 from .env)")
    parser.add_argument("-o", "--output", default=default_out, help="Output JSON (default: DIFF_OUTPUT from .env)")
    parser.add_argument("--designator-col", type=int, default=default_col, help="Designator column 0-based (default: DESIGNATOR_COL from .env or 0)")
    parser.add_argument("--start-row", type=int, default=default_start, help="First data row 0-based, 9=row 10 (default: START_ROW from .env or 9)")
    args = parser.parse_args()

    path1 = (args.file1 or default1 or "").strip()
    path2 = (args.file2 or default2 or "").strip()
    if not path1 or not path2:
        parser.error("Need two files. Set SHEET1 and SHEET2 in .env or pass two paths.")
    try:
        d1 = read_designators(path1, args.designator_col, args.start_row)
        d2 = read_designators(path2, args.designator_col, args.start_row)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    only1 = sorted(d1 - d2)
    only2 = sorted(d2 - d1)
    both = sorted(d1 & d2)
    report = {
        "file1": path1,
        "file2": path2,
        "designator_col": args.designator_col,
        "start_row": args.start_row + 1,
        "only_in_file1": only1,
        "only_in_file2": only2,
        "in_both": both,
        "count_only_in_file1": len(only1),
        "count_only_in_file2": len(only2),
        "count_in_both": len(both),
    }
    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")
    print(f"Wrote {args.output}")
    print(f"Only in file1: {len(only1)} | Only in file2: {len(only2)} | In both: {len(both)}")
    if only1:
        print("  Only in file1:", ", ".join(only1[:20]) + (" ..." if len(only1) > 20 else ""))
    if only2:
        print("  Only in file2:", ", ".join(only2[:20]) + (" ..." if len(only2) > 20 else ""))
    return 0


if __name__ == "__main__":
    sys.exit(main())

find_bottom_termination_parts.py

@@ -0,0 +1,166 @@
#!/usr/bin/env python3
"""
From a spreadsheet (designator + description column, data from row 10), find components
whose description indicates a bottom termination package type (e.g. QFN, DFN, BGA).
Only the description column is searched (no separate package column).

Usage:
    python3 find_bottom_termination_parts.py sheet.xlsx --description-col 1 [-o output.json]
    python3 find_bottom_termination_parts.py  # uses SHEET, DESCRIPTION_COL, BOTTOM_TERM_OUTPUT from .env

All paths and options: use .env or CLI; CLI overrides .env.
"""
import argparse
import json
import os
import re
import sys
from pathlib import Path

try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

try:
    import pandas as pd
except ImportError:
    print("Install pandas and openpyxl: pip install pandas openpyxl", file=sys.stderr)
    sys.exit(1)

# Bottom-termination package patterns (case-insensitive, word boundary where needed)
BOTTOM_TERMINATION_PATTERNS = [
    r"\bqfn\b",   # Quad Flat No-leads
    r"\bdfn\b",   # Dual Flat No-leads
    r"\bbga\b",   # Ball Grid Array
    r"\blga\b",   # Land Grid Array
    r"\bson\b",   # Small Outline No-lead
    r"\bmlf\b",   # Micro Leadframe
    r"\bmlp\b",
    r"\bwdfn\b",
    r"\bwqfn\b",
    r"\bvqfn\b",
    r"\buqfn\b",
    r"\bxqfn\b",
    r"\bbottom\s+termination\b",  # generic phrase (e.g. on 0201/0402 parts)
]
BOTTOM_TERM_REGEXES = [re.compile(p, re.I) for p in BOTTOM_TERMINATION_PATTERNS]


def load_sheet(
    path: str,
    designator_col: int = 0,
    description_col: int = 1,
    start_row: int = 9,
) -> list[dict]:
    """Load spreadsheet; return list of {designator, description} from start_row (0-based; 9 = row 10)."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(path)
    suffix = p.suffix.lower()
    if suffix in (".xlsx", ".xls"):
        df = pd.read_excel(path, header=None, engine="openpyxl" if suffix == ".xlsx" else None)
    elif suffix == ".csv":
        df = pd.read_csv(path, header=None)
    else:
        raise ValueError(f"Unsupported format: {suffix}")
    if max(designator_col, description_col) >= df.shape[1]:
        raise ValueError(f"Sheet has {df.shape[1]} columns; need at least {max(designator_col, description_col) + 1}")
    rows = []
    for i in range(start_row, len(df)):
        des = str(df.iloc[i, designator_col]).strip() if pd.notna(df.iloc[i, designator_col]) else ""
        desc = str(df.iloc[i, description_col]).strip() if pd.notna(df.iloc[i, description_col]) else ""
        if des or desc:
            rows.append({"designator": des, "description": desc})
    return rows


def is_bottom_termination_in_description(description: str) -> tuple[bool, str]:
    """
    True if description (case-insensitive) contains a bottom termination package type
    (e.g. QFN, DFN, BGA). Returns (matched, pattern_matched) e.g. (True, "qfn").
    """
    if not (description or "").strip():
        return False, ""
    d = description.lower()
    for pat in BOTTOM_TERM_REGEXES:
        if pat.search(d):
            name = pat.pattern.replace(r"\b", "").replace("\\s+", " ").strip()
            return True, name
    return False, ""


def main() -> int:
    default_sheet = os.environ.get("SHEET", "").strip() or None
    default_out = os.environ.get("BOTTOM_TERM_OUTPUT", "").strip() or "outputs/bottom_termination_parts.json"
    default_des_col = os.environ.get("DESCRIPTION_COL", "").strip()
    default_start = os.environ.get("START_ROW", "").strip()
    try:
        default_des_col = int(default_des_col) if default_des_col else 1
    except ValueError:
        default_des_col = 1
    try:
        default_start = int(default_start) if default_start else 9
    except ValueError:
        default_start = 9

    parser = argparse.ArgumentParser(
        description="Find components with bottom termination package types (e.g. QFN, DFN, BGA); only description column is searched"
    )
    parser.add_argument("file", nargs="?", default=default_sheet, help="Spreadsheet path (default: SHEET from .env)")
    parser.add_argument("-o", "--output", default=default_out, help="Output JSON (default: BOTTOM_TERM_OUTPUT from .env)")
    parser.add_argument("--designator-col", type=int, default=0, help="Designator column 0-based (default 0)")
    parser.add_argument("--description-col", type=int, default=default_des_col, metavar="COL", help="Description column 0-based (searched for package types; default: DESCRIPTION_COL from .env or 1)")
    parser.add_argument("--start-row", type=int, default=default_start, help="First data row 0-based, 9=row 10 (default: START_ROW from .env or 9)")
    args = parser.parse_args()

    path = (args.file or default_sheet or "").strip()
    if not path:
        parser.error("No spreadsheet. Set SHEET in .env or pass file path.")
    if not Path(path).exists():
        print(f"Error: file not found: {path}", file=sys.stderr)
        return 1
    try:
        rows = load_sheet(
            path,
            args.designator_col,
            args.description_col,
            args.start_row,
        )
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1

    matches = []
    for r in rows:
        ok, pattern = is_bottom_termination_in_description(r["description"])
        if ok:
            matches.append({**r, "matched_pattern": pattern})

    report = {
        "file": path,
        "designator_col": args.designator_col,
        "description_col": args.description_col,
        "start_row": args.start_row + 1,
        "count": len(matches),
        "parts": matches,
    }
    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(report, indent=2), encoding="utf-8")
    print(f"Wrote {args.output}")
    print(f"Found {len(matches)} components with bottom termination (by description column)")
    for m in matches:
        extra = f" [{m['matched_pattern']}]" if m.get("matched_pattern") else ""
        desc = m["description"][:70] + "..." if len(m["description"]) > 70 else m["description"]
        print(f"  {m['designator']}: {desc}{extra}")
    return 0


if __name__ == "__main__":
    sys.exit(main())

requirements.txt

@@ -1 +1,3 @@
python-dotenv>=1.0.0
pandas>=2.0.0
openpyxl>=3.1.0

tests/sheet1.csv

@@ -0,0 +1,13 @@
x
x
x
x
x
x
x
x
x
C1,10uF,0805
C2,22uF,0805
C3,1uF,0603
R1,10k,0805

tests/sheet2.csv

@@ -0,0 +1,13 @@
x
x
x
x
x
x
x
x
x
C1,10uF,0805
C3,1uF,0603
R1,10k,0805
C4,100nF,0603

tests/sheet_with_descriptions.csv

@@ -0,0 +1,16 @@
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
x,x,x,x
C1,10uF,0805,Capacitor MLCC bottom termination 0201 10uF 10V
C2,22uF,0805,Capacitor MLCC 22uF 16V
C3,1uF,0603,Capacitor MLCC bottom termination 0402 1uF 6.3V
R1,10k,0805,Resistor thick film 10k 1%
R2,100R,0201,Resistor bottom termination 0201 100ohm
U1,IC Regulator,QFN-16,3.3V LDO QFN-16 300mA
U2,MCU,DFN-8,ARM Cortex-M0+ DFN-8