There is a wide gulf between "talking about a design" and "a system that actually runs." This article is a package for crossing that gulf in 30 minutes.

The free article "[Premium-grade] Kawai's database design, fully revealed" explained the design philosophy behind the database (the WHO / WHAT / HOW structure and the autonomous-reaction layer). This article is its complete implementation: three Python scripts, a shell wrapper, one launchd config file, seven CSV templates, setup instructions, and a guide to adapting it to your own business, all published in full.

The target reader is clear.
This package gives you three things.

1. A complete set of working code
| File | Role | Lines |
|---|---|---|
| score_atoms.py | 5-axis scoring → atom_scores.csv + top_atoms.md | ~180 |
| build_cross_analytics.py | Channel × theme matrix → theme_channel_matrix.csv | ~110 |
| append_to_db.py | Auto-appends pain/what/how from JSON input (with duplicate detection) | ~160 |
| update_db.sh | Chained-execution wrapper launched by launchd | ~15 |
| com.kawai.atom-score.plist | macOS launchd WatchPaths definition | ~25 |
2. CSV templates (7 files)

Schemas and sample rows for persona.csv / pain.csv / what.csv / how.csv / atoms.csv / pipeline.csv / outputs.csv.

3. Setup instructions + substitution guide

How to get it running from zero, and how to swap in your own business domain.
Scores each atom on five axes (theme momentum, track record, expansion potential, freshness, pattern fit) and generates a ranked recommendation list.

Save as /path/to/database/system/scripts/score_atoms.py.
#!/usr/bin/env python3
"""
Runs automatically whenever atoms.csv is updated (launchd WatchPaths).
Outputs:
    database/content/atom_scores.csv - 5-axis scores for every atom
    database/content/top_atoms.md    - top-10 recommendation list
"""
import csv
from collections import defaultdict
from datetime import date, datetime
from pathlib import Path

BASE = Path("/Users/YOURNAME/Documents/database")  # <- replace with your own path
TODAY = date.today()


def load_csv(path):
    if not path.exists():
        return []
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def theme_priority_score(priority, health="STABLE"):
    table = {
        ("S", "HOT"): 30, ("S", "STABLE"): 25,
        ("A", "HOT"): 25, ("A", "STABLE"): 20,
        ("B", "HOT"): 20, ("B", "STABLE"): 15,
    }
    if health == "COOLING":
        return 5
    if health in ("DEAD", "SATURATED"):
        return 0
    return table.get((priority, health), 10)


def days_since(date_str):
    if not date_str:
        return 9999
    for fmt in ("%Y-%m-%d", "%Y/%m/%d", "%Y-%m"):
        try:
            d = datetime.strptime(date_str[:len(fmt)], fmt).date()
            return (TODAY - d).days
        except ValueError:
            continue
    return 9999


def main():
    atoms = load_csv(BASE / "content/atoms.csv")
    outputs = load_csv(BASE / "content/outputs.csv")
    taxonomy = load_csv(BASE / "content/theme_taxonomy.csv")
    pipeline = load_csv(BASE / "content/pipeline.csv")

    theme_map = {t["theme_id"]: {"priority": t.get("priority", "B"),
                                 "health": t.get("health", "STABLE")}
                 for t in taxonomy}

    # Which channels each atom has appeared on, and when it last published
    atom_channels = defaultdict(set)
    atom_last_date = {}
    for o in outputs:
        pub = o.get("published", "")
        for aid in o.get("atom_ids", "").split(";"):
            aid = aid.strip()
            if not aid:
                continue
            atom_channels[aid].add(o.get("channel", ""))
            if aid not in atom_last_date or pub > atom_last_date[aid]:
                atom_last_date[aid] = pub

    queued_atoms = set()
    for p in pipeline:
        if p.get("status") == "queued":
            for aid in p.get("atom_ids", "").split(";"):
                queued_atoms.add(aid.strip())

    recent_outputs = sorted(outputs, key=lambda r: r.get("published", ""))[-3:]
    recent_channels = [o.get("channel", "") for o in recent_outputs]
    all_channels = {"post", "note", "instagram", "lab", "training", "x_article"}

    scores = []
    for atom in atoms:
        if atom.get("status") == "archived":
            continue
        aid = atom["atom_id"]
        theme = atom.get("theme", "")
        ti = theme_map.get(theme, {"priority": "B", "health": "STABLE"})

        # s1: theme momentum
        s1 = theme_priority_score(ti["priority"], ti["health"])

        # s2: track record (best impressions so far)
        try:
            imp = int(str(atom.get("best_imp", "") or 0).replace(",", ""))
        except ValueError:
            imp = 0
        if aid not in atom_channels:
            s2 = 12  # never published: moderate default
        elif imp >= 100000:
            s2 = 25
        elif imp >= 50000:
            s2 = 20
        elif imp >= 20000:
            s2 = 15
        elif imp >= 5000:
            s2 = 10
        else:
            s2 = 5

        # s3: expansion potential (channels not yet used)
        unused_ch = all_channels - atom_channels.get(aid, set())
        if aid in queued_atoms:
            s3 = 10
        elif len(unused_ch) >= 2:
            s3 = 20
        elif len(unused_ch) == 1:
            s3 = 15
        else:
            s3 = 0

        # s4: freshness (days since last publication)
        days = days_since(atom_last_date.get(aid, ""))
        if days == 9999:
            s4 = 15
        elif days >= 30:
            s4 = 12
        elif days >= 14:
            s4 = 8
        elif days >= 7:
            s4 = 3
        else:
            s4 = 0

        # s5: pattern fit (penalize repeating the same channel back-to-back)
        main_ch = (atom.get("channels_used") or "").split(";")[0]
        recent_same = recent_channels.count(main_ch)
        s5 = [10, 7, 3, 0][min(recent_same, 3)]

        scores.append({
            "atom_id": aid, "claim": atom.get("claim", ""), "theme": theme,
            "persona_ids": atom.get("persona_ids", ""),
            "status": atom.get("status", ""),
            "total_score": s1 + s2 + s3 + s4 + s5,
            "s1_theme": s1, "s2_potential": s2, "s3_expansion": s3,
            "s4_freshness": s4, "s5_pattern": s5,
            "last_updated": TODAY.isoformat(),
        })

    scores.sort(key=lambda r: -r["total_score"])
    fieldnames = ["atom_id", "claim", "theme", "persona_ids", "status", "total_score",
                  "s1_theme", "s2_potential", "s3_expansion", "s4_freshness",
                  "s5_pattern", "last_updated"]
    with open(BASE / "content/atom_scores.csv", "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=fieldnames, quoting=csv.QUOTE_ALL)
        w.writeheader()
        w.writerows(scores)

    top = scores[:10]
    md_lines = [f"# Top Atoms - {TODAY.isoformat()}\n\n"]
    for i, s in enumerate(top, 1):
        md_lines.append(
            f"{i}. **{s['claim'][:60]}** `{s['atom_id']}` `{s['theme']}`\n"
            f"   score: {s['total_score']} "
            f"(theme {s['s1_theme']} / record {s['s2_potential']} / expansion {s['s3_expansion']}"
            f" / freshness {s['s4_freshness']} / pattern {s['s5_pattern']})\n"
        )
    (BASE / "content/top_atoms.md").write_text("".join(md_lines), encoding="utf-8")
    print(f"scored={len(scores)} atoms → atom_scores.csv + top_atoms.md")


if __name__ == "__main__":
    main()
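If you want to sanity-check threshold tweaks before wiring the script into launchd, the s2 (track record) and s4 (freshness) buckets can be lifted out as pure functions. This is a restatement of the logic in score_atoms.py above, exercised with made-up inputs:

```python
# s2/s4 bucket logic extracted from score_atoms.py for quick testing.

def s2_bucket(imp: int, never_used: bool) -> int:
    """Track-record score from best impressions so far."""
    if never_used:
        return 12  # unpublished atoms get a moderate default
    if imp >= 100_000:
        return 25
    if imp >= 50_000:
        return 20
    if imp >= 20_000:
        return 15
    if imp >= 5_000:
        return 10
    return 5

def s4_bucket(days: int) -> int:
    """Freshness score from days since last publication."""
    if days == 9999:  # sentinel: never published
        return 15
    if days >= 30:
        return 12
    if days >= 14:
        return 8
    if days >= 7:
        return 3
    return 0

print(s2_bucket(154_526, never_used=False))  # → 25
print(s4_bucket(45))                         # → 12
```

Tweaking the cut-offs here and re-running a few example atoms is much faster than round-tripping through launchd.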
Generates a channel × theme performance matrix: an aggregate table that shows at a glance which themes work on which channels.

Save as /path/to/database/system/scripts/build_cross_analytics.py.
#!/usr/bin/env python3
"""Builds the cross-channel theme x performance matrix."""
import csv
from collections import defaultdict, Counter
from datetime import date
from pathlib import Path

BASE = Path("/Users/YOURNAME/Documents/database")  # <- replace with your own path
TODAY = date.today().isoformat()


def load_csv(path):
    if not path.exists():
        return []
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def safe_int(v):
    try:
        return int(str(v or 0).replace(",", "").replace(" ", ""))
    except ValueError:
        return 0


def main():
    outputs = load_csv(BASE / "content/outputs.csv")
    atoms = load_csv(BASE / "content/atoms.csv")
    atom_meta = {a["atom_id"]: {"theme": a.get("theme", "")} for a in atoms}

    matrix = defaultdict(lambda: {
        "count": 0, "imp_or_views": 0, "likes_or_suki": 0,
        "engage_sum": 0.0, "engage_n": 0,
    })
    for o in outputs:
        ch = o.get("channel", "")
        if not ch:
            continue
        themes = set()
        for aid in o.get("atom_ids", "").split(";"):
            t = atom_meta.get(aid.strip(), {}).get("theme", "")
            if t:
                themes.add(t)
        if not themes:
            themes = {"(unknown)"}
        for t in themes:
            k = (ch, t)
            matrix[k]["count"] += 1
            matrix[k]["imp_or_views"] += safe_int(o.get("views_or_imp", 0))
            matrix[k]["likes_or_suki"] += safe_int(o.get("likes_or_suki", 0))
            er = str(o.get("engage_rate", "") or "").replace("%", "").strip()
            if er:
                try:
                    matrix[k]["engage_sum"] += float(er)
                    matrix[k]["engage_n"] += 1
                except ValueError:
                    pass

    fieldnames = ["channel", "theme", "output_count", "total_imp_or_views",
                  "total_likes_or_suki", "avg_engage_rate", "updated_at"]
    rows = []
    for (ch, t), v in matrix.items():
        avg_er = v["engage_sum"] / v["engage_n"] if v["engage_n"] > 0 else 0
        rows.append({
            "channel": ch, "theme": t,
            "output_count": v["count"],
            "total_imp_or_views": v["imp_or_views"],
            "total_likes_or_suki": v["likes_or_suki"],
            "avg_engage_rate": f"{avg_er:.2f}%",
            "updated_at": TODAY,
        })
    rows.sort(key=lambda r: (-r["output_count"], r["channel"], r["theme"]))

    with open(BASE / "analytics/theme_channel_matrix.csv", "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=fieldnames, quoting=csv.QUOTE_ALL)
        w.writeheader()
        w.writerows(rows)
    print(f"theme_channel_matrix.csv: {len(rows)} rows")
    for ch, n in sorted(Counter(r["channel"] for r in rows).items()):
        print(f"  {ch}: {n} themes")


if __name__ == "__main__":
    main()
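Once the matrix exists, downstream analysis is just CSV reading. As an illustration (not part of the package), here is one way to pick the strongest channel per theme by average engagement rate; the sample rows below are hypothetical:

```python
# Pick the best channel per theme from theme_channel_matrix.csv rows.
def best_channel_per_theme(rows):
    """rows: dicts with theme, channel, avg_engage_rate (e.g. '0.48%')."""
    best = {}
    for r in rows:
        er = float(r["avg_engage_rate"].rstrip("%"))
        t = r["theme"]
        if t not in best or er > best[t][1]:
            best[t] = (r["channel"], er)
    return best

# With the real file you would load rows via csv.DictReader; these are made up:
rows = [
    {"theme": "figma-obsolete", "channel": "post", "avg_engage_rate": "0.48%"},
    {"theme": "figma-obsolete", "channel": "note", "avg_engage_rate": "0.91%"},
]
print(best_channel_per_theme(rows))  # → {'figma-obsolete': ('note', 0.91)}
```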
Appends insights passed as JSON to the three CSVs pain/what/how, with duplicate detection.

Save as /path/to/scripts/append_to_db.py.
#!/usr/bin/env python3
"""
JSON -> pain/what/how CSV auto-append.
Usage: echo '<JSON>' | python3 append_to_db.py
   or: python3 append_to_db.py --json '<JSON>'
"""
import csv
import json
import re
import sys
from datetime import date
from pathlib import Path

BASE = Path("/Users/YOURNAME/Documents/database/insight")  # <- your own path
TODAY = date.today().isoformat()


def next_id(filepath, prefix):
    """Return the next sequential ID, e.g. PR004 after PR003."""
    max_n = 0
    if filepath.exists():
        with open(filepath, newline="") as f:
            for r in csv.DictReader(f):
                m = re.search(r"\d+", r.get("id", ""))
                if m:
                    max_n = max(max_n, int(m.group()))
    return f"{prefix}{max_n + 1:03d}"


def existing_map(filepath):
    """Map existing titles to IDs, for duplicate detection."""
    if not filepath.exists():
        return {}
    with open(filepath, newline="") as f:
        return {r.get("title", "").strip(): r.get("id", "")
                for r in csv.DictReader(f)}


def append_rows(filepath, fieldnames, rows):
    exists = filepath.exists()
    with open(filepath, "a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=fieldnames, quoting=csv.QUOTE_ALL)
        if not exists:
            w.writeheader()
        w.writerows(rows)


def main():
    if len(sys.argv) >= 3 and sys.argv[1] == "--json":
        data = json.loads(sys.argv[2])
    else:
        data = json.load(sys.stdin)

    pain_fields = ["id", "title", "domain", "severity", "affected_scope",
                   "evidence", "hypothesis", "persona_ids", "atom_ids", "created_at"]
    what_fields = ["id", "pain_id", "title", "description", "atom_ids", "created_at"]
    how_fields = ["id", "what_id", "persona_ids", "channel", "format",
                  "awareness_level", "priority", "status", "created_at"]

    pain_file = BASE / "pain.csv"
    what_file = BASE / "what.csv"
    how_file = BASE / "how.csv"

    pain_map = existing_map(pain_file)
    what_map = existing_map(what_file)
    added = {"pain": 0, "what": 0, "how": 0}
    skipped = {"pain": 0, "what": 0}

    for pain_data in data.get("pains", []):
        title = pain_data["title"].strip()
        if title in pain_map:
            pain_id = pain_map[title]  # duplicate: reuse the existing ID
            skipped["pain"] += 1
        else:
            pain_id = next_id(pain_file, "PR")
            pain_map[title] = pain_id
            append_rows(pain_file, pain_fields, [{
                "id": pain_id, "title": pain_data["title"],
                "domain": pain_data.get("domain", ""),
                "severity": pain_data.get("severity", ""),
                "affected_scope": pain_data.get("affected_scope", ""),
                "evidence": pain_data.get("evidence", ""),
                "hypothesis": pain_data.get("hypothesis", ""),
                "persona_ids": pain_data.get("persona_ids", ""),
                "atom_ids": "", "created_at": TODAY,
            }])
            added["pain"] += 1

        for what_data in pain_data.get("whats", []):
            wtitle = what_data["title"].strip()
            if wtitle in what_map:
                skipped["what"] += 1
                continue
            what_id = next_id(what_file, "W")
            what_map[wtitle] = what_id
            append_rows(what_file, what_fields, [{
                "id": what_id, "pain_id": pain_id,
                "title": what_data["title"],
                "description": what_data.get("description", ""),
                "atom_ids": "", "created_at": TODAY,
            }])
            added["what"] += 1

            for how_data in what_data.get("hows", []):
                how_id = next_id(how_file, "H")
                append_rows(how_file, how_fields, [{
                    "id": how_id, "what_id": what_id,
                    "persona_ids": pain_data.get("persona_ids", ""),
                    "channel": how_data.get("channel", ""),
                    "format": how_data.get("format", ""),
                    "awareness_level": how_data.get("awareness_level", "problem_aware"),
                    "priority": how_data.get("priority", "3"),
                    "status": "idea", "created_at": TODAY,
                }])
                added["how"] += 1

    print(f"appended: pain:{added['pain']} / what:{added['what']} / how:{added['how']}")
    if skipped["pain"] or skipped["what"]:
        print(f"duplicates skipped: pain:{skipped['pain']} / what:{skipped['what']}")


if __name__ == "__main__":
    main()
The wrapper that launchd invokes. Runs scoring and cross-channel analytics in sequence.

Save as /path/to/database/system/scripts/update_db.sh and grant execute permission with chmod +x.
#!/bin/bash
LOG_DIR="/Users/YOURNAME/Documents/database/system/logs"
mkdir -p "$LOG_DIR"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] update_db.sh start" >> "$LOG_DIR/update_db.log"
python3 /Users/YOURNAME/Documents/database/system/scripts/score_atoms.py \
  >> "$LOG_DIR/update_db.log" 2>&1
python3 /Users/YOURNAME/Documents/database/system/scripts/build_cross_analytics.py \
  >> "$LOG_DIR/update_db.log" 2>&1
echo "[$(date '+%Y-%m-%d %H:%M:%S')] done" >> "$LOG_DIR/update_db.log"
The macOS launchd job definition. When a CSV listed under WatchPaths changes, launchd starts update_db.sh.

Save as ~/Library/LaunchAgents/com.YOURNAME.atom-score.plist.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.YOURNAME.atom-score</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Users/YOURNAME/Documents/database/system/scripts/update_db.sh</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/Users/YOURNAME/Documents/database/content/atoms.csv</string>
        <string>/Users/YOURNAME/Documents/database/content/outputs.csv</string>
    </array>
    <key>StandardOutPath</key>
    <string>/Users/YOURNAME/Documents/database/system/logs/atom-score.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/YOURNAME/Documents/database/system/logs/atom-score.err</string>
    <key>RunAtLoad</key>
    <false/>
</dict>
</plist>
Schemas and sample rows for each CSV. Place them under database/insight/ and database/content/.
persona_id,label,pain_domain,awareness_level,channel_affinity,description
PE001,Freelancers and solo creators,business/creator,problem_aware,post/note,Side-gig to full-time; struggles with pricing and landing clients
PE002,SMB AI leads and owners,ai_adoption/business,unaware,note/seminar/consulting,Adopted AI but not getting value from it
| Column | Purpose |
|---|---|
| persona_id | Unique ID in the PE001 format |
| awareness_level | unaware / problem_aware / solution_seeking |
| channel_affinity | Channels that reach this persona |
id,title,domain,severity,affected_scope,evidence,hypothesis,persona_ids,atom_ids,created_at
PR001,AI agents go dormant after adoption,ai_adoption,5,22% of the 43% of SMBs that adopted leave it unused,McKinsey2025 / 64% SMB DX failure rate,No business-process redesign,PE002,,2026-04-18
| Column | Purpose |
|---|---|
| severity | 1-5; 5 means direct business loss |
| evidence | Supporting sources (semicolon-separated if multiple) |
| persona_ids | Personas who have this pain |
id,pain_id,title,description,atom_ids,created_at
W001,PR001,AI adoption diagnostic content,Structural analysis of why it goes unused,,2026-04-18

pain_id joins each what to its pain; atom_ids connects it to implemented content.
id,what_id,persona_ids,channel,format,awareness_level,priority,status,created_at
H001,W001,PE002,note,Article (with survey numbers),problem_aware,1,idea,2026-04-18
| Column | Purpose |
|---|---|
| channel | post / note / instagram / lab / training / consulting |
| awareness_level | Rationale for the channel choice |
| priority | 1 (highest) to 5 |
atom_id,claim,evidence,theme,source,origin,status,created,channels_used,best_imp,best_views,best_engage_rate,persona_ids
A001,I can no longer find a reason to use Figma,Firsthand experience: the more I use Claude Code the less I open Figma,figma-obsolete,experience,experience,noted,2026-02,post;note,154526,4182,0.48%,PE004
pipe_id,type,atom_ids,source_output_id,target_channel,priority,status,created,title,note_url,archive_path,views,suki_rate,theme,menta_plan,rationale,persona_ids,pain_id
P001,atom,A010,,note,1,queued,2026-04-15,,,,,,,figma-obsolete,,Expansion condition met,PE004,PR003
| Column | Purpose |
|---|---|
| type | atom (new) / repost (reuse of an existing output) |
| status | queued → drafting → published |
output_id,channel,title,atom_ids,published,views_or_imp,likes_or_suki,engage_rate,url,persona_ids,pain_id
OP001,post,I can no longer find a reason to use Figma,A001,2026-02-15,154526,1234,0.48%,https://x.com/...,PE004,PR003
theme_id,label,parent,priority,health,post_imp_total,post_like_total,note_views_total,note_suki_total
figma-obsolete,Figma obsolescence,,A,HOT,334551,1123,4182,38
ai-philosophy,AI philosophy and thinking,,A,STABLE,315428,3613,,
priority (S/A/B) and health (HOT/STABLE/COOLING/DEAD/SATURATED) feed directly into the scoring.
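To make the priority × health interaction concrete, this is the same lookup that score_atoms.py applies (theme_priority_score), with a few example calls:

```python
# The s1 lookup from score_atoms.py: priority x health -> theme score.
def theme_priority_score(priority, health="STABLE"):
    table = {
        ("S", "HOT"): 30, ("S", "STABLE"): 25,
        ("A", "HOT"): 25, ("A", "STABLE"): 20,
        ("B", "HOT"): 20, ("B", "STABLE"): 15,
    }
    if health == "COOLING":
        return 5   # cooling themes are nearly benched
    if health in ("DEAD", "SATURATED"):
        return 0   # dead/saturated themes score nothing on s1
    return table.get((priority, health), 10)

print(theme_priority_score("S", "HOT"))      # → 30
print(theme_priority_score("A", "COOLING"))  # → 5
print(theme_priority_score("B", "DEAD"))     # → 0
```

Note that health dominates priority: even an S-rank theme scores 5 once it is COOLING.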
mkdir -p ~/Documents/database/{insight,content,analytics,system/scripts,system/logs}
Save each CSV from Part 3 to the following locations:
database/
├── insight/
│ ├── persona.csv
│ ├── pain.csv
│ ├── what.csv
│ └── how.csv
├── content/
│ ├── atoms.csv
│ ├── pipeline.csv
│ ├── outputs.csv
│ └── theme_taxonomy.csv
└── analytics/
└── (empty; generated by the scripts)
Save each script from Part 2, replacing every occurrence of /Users/YOURNAME/ with your own username.
chmod +x ~/Documents/database/system/scripts/update_db.sh
cp com.YOURNAME.atom-score.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/com.YOURNAME.atom-score.plist
Confirm the job loaded with launchctl list | grep atom-score.
Append one row to atoms.csv and save:

echo '"A002","A new claim","Supporting evidence","ai-philosophy","experience","experience","raw","2026-04-18","","","","","PE001"' \
  >> ~/Documents/database/content/atoms.csv
Within 5 seconds, the following files are generated automatically:
ls -la ~/Documents/database/content/atom_scores.csv
ls -la ~/Documents/database/content/top_atoms.md
ls -la ~/Documents/database/analytics/theme_channel_matrix.csv
cat ~/Documents/database/system/logs/update_db.log
If the log contains lines like "scored=N" and "theme_channel_matrix.csv: N rows", it worked.
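If the pipeline does not fire, it is worth checking that every input CSV is in place before debugging launchd. This small helper (not part of the package; BASE is a placeholder for your own path) reports row counts or flags missing files:

```python
# Sanity check: confirm every CSV the pipeline reads exists, with row counts.
import csv
from pathlib import Path

BASE = Path.home() / "Documents/database"  # placeholder: your own path
EXPECTED = [
    "insight/persona.csv", "insight/pain.csv",
    "insight/what.csv", "insight/how.csv",
    "content/atoms.csv", "content/pipeline.csv",
    "content/outputs.csv", "content/theme_taxonomy.csv",
]

def check(base: Path) -> dict:
    """Return {relative path: row count, or None if the file is missing}."""
    results = {}
    for rel in EXPECTED:
        p = base / rel
        if p.exists():
            with open(p, newline="") as f:
                results[rel] = len(list(csv.DictReader(f)))
        else:
            results[rel] = None
    return results

for rel, n in check(BASE).items():
    print(f"{rel}: {'MISSING' if n is None else f'{n} rows'}")
```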
echo '{
  "pains": [{
    "title": "Test pain",
    "domain": "business",
    "severity": 3,
    "persona_ids": "PE001",
    "whats": [{
      "title": "Test solution",
      "hows": [{"channel": "post", "format": "test", "awareness_level": "unaware", "priority": 3}]
    }]
  }]
}' | python3 ~/Documents/database/system/scripts/append_to_db.py

If the output reports pain:1 / what:1 / how:1, it is working.
The system is not tied to any particular industry. Swap the contents of persona.csv / pain.csv / what.csv for your own business and it works as-is.
| Layer | Original data (my example) | Replacement (dental example) |
|---|---|---|
| persona | Freelancers / SMB AI leads | Dental practice owners / preventive-care hygienists |
| pain | Dormant AI adoption / unclear upskilling path | Fewer new patients / failed self-pay conversion / hiring struggles |
| what | AI adoption diagnostic content | DX diagnostics / booking-system optimization |
| how | post/note/consulting | LINE / newsletters / seminars for practice owners |
| Industry | Recommended sources |
|---|---|
| Dental | r/dentistry / Japan Dental Association surveys / clinic-management white papers |
| Real estate | Zentakuren industry reports / MLIT surveys / landlord Q&A sites |
| Beauty | Hot Pepper Beauty industry trends / reasons for leaving cited on stylist job boards |
| Food service | Low-rated Tabelog reviews / owner blogs on Inshokuten.com |
Define the themes for your own business. For a dental practice, for example:

theme_id,label,priority
new-patient-acquisition,New patient acquisition,S
self-pay-conversion,Self-pay treatment conversion,A
staff-retention,Staff retention,B

Set priority by business weight: S is top priority, B is supporting.
These are extension candidates for after the basic setup is running.

Post the top-10 recommendations to Slack whenever they update:

# append to update_db.sh
TOP_ATOMS=$(head -20 /path/to/database/content/top_atoms.md)
curl -X POST -H 'Content-type: application/json' \
  --data "{\"text\":\"Today's picks\n${TOP_ATOMS}\"}" \
  https://hooks.slack.com/services/YOUR/WEBHOOK/URL
Add the new channels to all_channels in score_atoms.py:
all_channels = {"post","note","instagram","lab","training","x_article",
"youtube","podcast","mail_magazine"}
Instead of WatchPaths, poll Gmail over IMAP, extract pains from mail under a specific label, and feed them into append_to_db.py. Feedback emails from trainings and seminars then land in the DB automatically.

Run score_atoms.py weekly via cron, aggregate its output, and write per-theme growth and decline rates to weekly_report.md.
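One possible shape for that weekly report (the file names follow the article; summarizing by per-theme average score is my assumption, and week-over-week comparison would need dated snapshots, which is omitted here):

```python
# Sketch: aggregate atom_scores.csv by theme and write weekly_report.md.
import csv
from collections import defaultdict
from pathlib import Path

def summarize(rows):
    """Average total_score per theme from atom_scores.csv rows."""
    by_theme = defaultdict(list)
    for r in rows:
        by_theme[r["theme"]].append(int(r["total_score"]))
    return {t: sum(v) / len(v) for t, v in by_theme.items()}

def write_report(scores_csv: Path, report_md: Path):
    with open(scores_csv, newline="") as f:
        avgs = summarize(list(csv.DictReader(f)))
    lines = ["# Weekly Theme Report\n\n"]
    for theme, avg in sorted(avgs.items(), key=lambda kv: -kv[1]):
        lines.append(f"- {theme}: avg score {avg:.1f}\n")
    report_md.write_text("".join(lines), encoding="utf-8")

# Growth/decline rates vs. last week would require keeping dated copies
# of atom_scores.csv and diffing the two summaries.
```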
What you now have: open top_atoms.md and you see it at a glance; open theme_channel_matrix.csv and each theme's channel coverage shows up in numbers.
The database changes from "a place to put data" into "a system that thinks."

The free article covers the design philosophy; this one covers the implementation. Read both and you can stand up your own system by copy-paste.
Claude Code: 40 tricks you lose out on by not knowing

⬇️ If you want close, one-on-one coaching, this is recommended.

⬇️ For corporate training inquiries, see here.
#AI #GenerativeAI #AIAgents #AIEra #AIUtilization #Python #DatabaseDesign #Automation #Claude #ClaudeCode