From bec8cdaf90b8d8f1428476d353846d373a99bedb Mon Sep 17 00:00:00 2001 From: Don Lowery Date: Sat, 11 Apr 2026 03:10:04 +0000 Subject: [PATCH 1/2] Add content360 skill - Notion to Content360 sync --- External/content360/DISPLAY.json | 4 + External/content360/README.md | 79 +++ External/content360/SKILL.md | 169 +++++++ .../content360/scripts/content360_sync.py | 461 ++++++++++++++++++ 4 files changed, 713 insertions(+) create mode 100644 External/content360/DISPLAY.json create mode 100644 External/content360/README.md create mode 100644 External/content360/SKILL.md create mode 100644 External/content360/scripts/content360_sync.py diff --git a/External/content360/DISPLAY.json b/External/content360/DISPLAY.json new file mode 100644 index 0000000..1e78ac6 --- /dev/null +++ b/External/content360/DISPLAY.json @@ -0,0 +1,4 @@ +{ + "icon": "share", + "tags": ["social-media", "content", "notion", "automation"] +} \ No newline at end of file diff --git a/External/content360/README.md b/External/content360/README.md new file mode 100644 index 0000000..0ed2264 --- /dev/null +++ b/External/content360/README.md @@ -0,0 +1,79 @@ +# Content360 Integration + +Syncs posts from a **Notion content calendar** to **Content360** (app.content360.io) for scheduling across Facebook, LinkedIn, X, Instagram, YouTube, TikTok, and Pinterest. + +## Setup + +### 1. Content360 + +- Log in at https://app.content360.io +- Go to **Profile → Access Tokens** and create a new token +- Note your **Workspace UUID** from the URL (e.g. `https://app.content360.io/os/{workspace}/posts`) +- Note your **login email and password** for session-based auth + +### 2. Notion + +- Create a Notion integration at https://www.notion.so/my-integrations +- Share your content calendar database with the integration +- Note the database ID from the URL + +### 3. Set Secrets (Zo → Settings → Advanced → Secrets) + +``` +CONTENT360_EMAIL=you@example.com +CONTENT360_PASSWORD=yourpassword +CONTENT360_API_KEY=your-bearer-token +CONTENT360_ORG_ID=your-workspace-uuid +NOTION_API_KEY=your-notion-integration-key +NOTION_DATABASE_ID=your-database-id +``` + +### 4. Install Dependencies + +```bash +pip install requests +``` + +## Usage + +```bash +# Dry run — see what would be synced +python3 scripts/content360_sync.py --dry-run + +# Real sync +python3 scripts/content360_sync.py + +# Filter by platform +python3 scripts/content360_sync.py --platforms facebook,linkedin,tiktok +``` + +## Notion Database Schema + +The script expects a Notion database with these properties: + +| Property | Type | Description | +|---|---|---| +| `Posted` | Checkbox | Set to true after syncing | +| `Schedule` | Date | Optional — set to schedule instead of draft | +| `Platform` | Select | facebook, linkedin, x, instagram, youtube, tiktok, pinterest | +| `Caption` | Rich Text | Main post content | +| `Hook` | Rich Text | Opening hook/line | +| `CTA` | Rich Text | Call to action | + +## How It Works + +1. Fetches all social accounts from Content360 +2. Queries Notion for unscheduled posts (Posted = false) +3. Creates each post as a draft in Content360, mapping Notion Platform → Content360 account +4. 
Marks synced posts as "Posted" in Notion + +## API Notes + +Content360 uses **Inertia.js + Laravel** — all routes are under `/os/` and require: + +- `Authorization: Bearer {token}` header +- `X-Inertia: true` header +- `X-Requested-With: XMLHttpRequest` header +- `Accept: application/json` header + +The `X-Inertia-Version` header must be updated from each response (automatic in the sync script). \ No newline at end of file diff --git a/External/content360/SKILL.md b/External/content360/SKILL.md new file mode 100644 index 0000000..c35f686 --- /dev/null +++ b/External/content360/SKILL.md @@ -0,0 +1,169 @@ +--- +name: content360 +description: Integrates with Content360 (app.content360.io) to create, schedule, and publish social media content across Facebook, LinkedIn, X, Instagram, YouTube, TikTok, Pinterest, Reddit, and more. Supports post creation, scheduling, analytics, media upload, and inbox management. Built on MixPost/Laravel + Inertia.js. +compatibility: Created for Zo Computer +metadata: + author: jaknyfe.zo.computer +--- +# Content360 Skill + +## Authentication + +Content360 uses a **session + bearer token** auth mechanism built on Laravel Sanctum + Inertia.js. + +### Setup + +1. **Get your bearer token** at `https://app.content360.io/os/profile/access-tokens` +2. **Set secrets in Zo**: [Settings → Advanced → Secrets](/?t=settings&s=advanced) + - `CONTENT360_API_KEY` = your bearer token (e.g. `N4P1Rg...`) + - `CONTENT360_ORG_ID` = your workspace UUID (e.g. `3f3006c0-a68f-4ac6-b4ee-c14d70356cbb`) +3. **Set your credentials** (for session auth): + - `CONTENT360_EMAIL` = your login email + - `CONTENT360_PASSWORD` = your login password + +**Note**: Access tokens require an active web session to work. If the token returns 401, re-authenticate by logging in via the web interface or using the session login flow in `content360_sync.py`. + +### Auth Flow + +The API requires **all** of these headers on every request: +``` +Authorization: Bearer {token} +X-Requested-With: XMLHttpRequest +X-Inertia: true +X-Inertia-Version: {version} # from any authenticated page response +Accept: application/json +Content-Type: application/json +``` + +The `X-Inertia-Version` is a hash that changes on app updates. It's returned in every Inertia JSON response under the `X-Inertia-Version` response header. The script handles this automatically. + +--- + +## Base URLs + +- **App**: `https://app.content360.io` +- **API namespace**: `/os/api/{workspace}` (workspace = org UUID, e.g. `3f3006c0-a68f-4ac6-b4ee-c14d70356cbb`) +- **Web namespace**: `/os/{workspace}` (same routing, differs only by Accept header) + +--- + +## API Endpoints + +All require auth headers above. Responses are Inertia JSON. 
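
+
+As a quick connectivity check, here is a minimal sketch (placeholder `TOKEN` and `WORKSPACE` values are yours to fill in) that sends the required headers to list posts; the first request can send an empty `X-Inertia-Version`, as the sync script does, and captures the real one from the response:
+
+```python
+import requests
+
+TOKEN = "your-bearer-token"        # placeholder: from /os/profile/access-tokens
+WORKSPACE = "your-workspace-uuid"  # placeholder: same value as CONTENT360_ORG_ID
+
+resp = requests.get(
+    f"https://app.content360.io/os/api/{WORKSPACE}/posts",
+    headers={
+        "Authorization": f"Bearer {TOKEN}",
+        "X-Requested-With": "XMLHttpRequest",
+        "X-Inertia": "true",
+        "X-Inertia-Version": "",  # empty is fine on the first call
+        "Accept": "application/json",
+    },
+)
+print(resp.status_code)
+print(resp.headers.get("X-Inertia-Version"))  # reuse this value on later requests
+```
+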
+ +### Posts + +| Method | Path | Description | +|--------|------|-------------| +| GET | `/os/api/{workspace}/posts` | List posts (paginated) | +| POST | `/os/api/{workspace}/posts` | Create post | +| GET | `/os/api/{workspace}/posts/{uuid}` | Get single post | +| PUT | `/os/api/{workspace}/posts/{uuid}` | Update post | +| DELETE | `/os/api/{workspace}/posts/{uuid}` | Delete single post | +| POST | `/os/api/{workspace}/posts/add-to-queue/{uuid}` | Add to queue | +| POST | `/os/api/{workspace}/posts/schedule/{uuid}` | Schedule queued post | +| POST | `/os/api/{workspace}/posts/approve/{uuid}` | Approve post | +| POST | `/os/api/{workspace}/posts/duplicate/{uuid}` | Duplicate post | + +### Create Post Payload + +```json +{ + "content": "Post text content", + "accounts": ["126117", "126129"], + "status": "draft", + "versions": [ + { + "account_id": 126117, + "is_original": true, + "content": [ + { + "body": "
<p>Post content</p>
", + "media": [], + "url": "", + "opened": true + } + ] + } + ] +} +``` + +- `status`: `"draft"` or `"schedule"` +- `accounts`: array of account IDs +- `versions`: per-account content blocks (body uses HTML) +- For plain text, use `"
<p>text</p>
"` in body + +### Accounts + +| Method | Path | Description | +|--------|------|-------------| +| GET | `/os/api/{workspace}/accounts` | List all connected accounts | +| GET | `/os/api/{workspace}/accounts/{id}` | Get single account | + +### Media + +| Method | Path | Description | +|--------|------|-------------| +| GET | `/os/api/{workspace}/media` | List media library | +| POST | `/os/api/{workspace}/media` | Upload media | +| DELETE | `/os/api/{workspace}/media` | Delete media | + +### Tags + +| Method | Path | Description | +|--------|------|-------------| +| GET | `/os/api/{workspace}/tags` | List tags | +| POST | `/os/api/{workspace}/tags` | Create tag | +| PUT | `/os/api/{workspace}/tags/{id}` | Update tag | +| DELETE | `/os/api/{workspace}/tags/{id}` | Delete tag | + +### Other + +| Method | Path | Description | +|--------|------|-------------| +| GET | `/os/{workspace}/calendar` | Calendar view | +| GET | `/os/{workspace}/posts/rss_campaigns` | RSS campaigns | +| GET | `/os/{workspace}/repeated-posts` | Repeated posts | +| GET | `/os/{workspace}/templates` | Templates | +| GET | `/os/{workspace}/inbox` | Inbox | +| POST | `/os/api/{workspace}/posts/import` | Import posts | + +### Webhook Endpoints + +| Method | Path | Description | +|--------|------|-------------| +| GET | `/os/{workspace}/webhooks` | List webhooks | +| POST | `/os/{workspace}/webhooks/store` | Create webhook | +| PUT | `/os/{workspace}/webhooks/{id}` | Update webhook | +| DELETE | `/os/{workspace}/webhooks/{id}` | Delete webhook | + +--- + +## Connected Accounts + +Your workspace `3f3006c0-a68f-4ac6-b4ee-c14d70356cbb` has: +- **YouTube**: Noēsis (126117), D.E. Lowery (126118), Rhusty Ironheart (126116) +- **Instagram**: aaronlodge49 (126129), azchapter_demolay (22117) +- **Facebook**: Aaron Masonic Lodge #49 (6648), Arizona DeMolay Crusader Club (22114), Southern Arizona DeMolay (22115), Arizona Chapter of the Order of DeMolay (22116) +- **Instagram Direct**: delowery67 (126141) +- **Pinterest**: chrome67 (126151), cbhrhusty (126146) +- **TikTok**: DE Lowery (126130) +- **Reddit**: Chrome67 (126145) + +--- + +## Running the Sync Script + +```bash +# Dry run +python3 /home/workspace/Skills/content360/scripts/content360_sync.py --dry-run + +# Real sync +python3 /home/workspace/Skills/content360/scripts/content360_sync.py + +# Sync specific platforms +python3 /home/workspace/Skills/content360/scripts/content360_sync.py --platforms facebook,linkedin,youtube +``` + +The script reads from your Notion content calendar, creates posts in Content360 as drafts, and schedules them. See `README.md` for full setup instructions. diff --git a/External/content360/scripts/content360_sync.py b/External/content360/scripts/content360_sync.py new file mode 100644 index 0000000..78d0684 --- /dev/null +++ b/External/content360/scripts/content360_sync.py @@ -0,0 +1,461 @@ +#!/usr/bin/env python3 +""" +Content360 Sync — syncs posts from Notion content calendar to Content360. + +Usage: + python3 content360_sync.py [--dry-run] [--platforms facebook,linkedin,x,instagram,youtube,tiktok,pinterest,reddit] + +Environment variables (set in Zo Secrets): + CONTENT360_EMAIL - Login email + CONTENT360_PASSWORD - Login password + CONTENT360_API_KEY - Bearer token (from /os/profile/access-tokens) + CONTENT360_ORG_ID - Workspace UUID (e.g. 
3f3006c0-a68f-4ac6-b4ee-c14d70356cbb)
+    NOTION_API_KEY       - Notion integration secret
+    NOTION_DATABASE_ID   - Content calendar database ID
+"""
+
+import os
+import sys
+import json
+import time
+import argparse
+import requests
+from datetime import datetime
+from urllib.parse import unquote
+
+# ── Config ──────────────────────────────────────────────────────────────────
+CONTENT360_EMAIL = os.environ.get("CONTENT360_EMAIL", "")
+CONTENT360_PASSWORD = os.environ.get("CONTENT360_PASSWORD", "")
+CONTENT360_API_KEY = os.environ.get("CONTENT360_API_KEY", "")
+CONTENT360_ORG_ID = os.environ.get("CONTENT360_ORG_ID", "")
+CONTENT360_BASE_URL = "https://app.content360.io"
+NOTION_API_KEY = os.environ.get("NOTION_API_KEY", "")
+NOTION_DATABASE_ID = os.environ.get("NOTION_DATABASE_ID", "de201e51-282d-4c5a-973f-105dec4e96be")
+
+# ── Session setup ───────────────────────────────────────────────────────────
+session = requests.Session()
+
+INERTIA_VERSION = None  # Set after first authenticated request
+
+
+def inertia_headers(extra=None):
+    """Build headers required for Content360 Inertia API."""
+    headers = {
+        "Authorization": f"Bearer {CONTENT360_API_KEY}",
+        "X-Requested-With": "XMLHttpRequest",
+        "X-Inertia": "true",
+        "X-Inertia-Version": INERTIA_VERSION or "",
+        "Accept": "application/json",
+        "Content-Type": "application/json",
+    }
+    if extra:
+        headers.update(extra)
+    return headers
+
+
+def get_xsrf_token():
+    """Get current XSRF token from session cookies."""
+    for c in session.cookies:
+        if c.name == "XSRF-TOKEN":
+            return unquote(c.value)
+    return ""
+
+
+def _update_inertia_version(resp):
+    """Capture the X-Inertia-Version response header; it changes on app updates."""
+    global INERTIA_VERSION
+    version = resp.headers.get("X-Inertia-Version")
+    if version:
+        INERTIA_VERSION = version
+
+
+def _ensure_auth():
+    """Ensure we have a valid session. Re-authenticates if needed."""
+    global INERTIA_VERSION
+
+    # Check if we can make an authenticated request with existing session
+    if INERTIA_VERSION:
+        return True  # Already authenticated
+
+    if not CONTENT360_EMAIL or not CONTENT360_PASSWORD:
+        return False
+
+    # Step 1: Get CSRF token and session cookies from the login page
+    login_page = session.get(f"{CONTENT360_BASE_URL}/os/login")
+    login_html = login_page.text
+
+    import re
+    csrf_match = re.search(r'<meta name="csrf-token" content="([^"]+)"', login_html)
+    csrf_token = csrf_match.group(1) if csrf_match else ""
+
+    # Step 2: Post credentials to establish the Laravel session
+    resp = session.post(
+        f"{CONTENT360_BASE_URL}/os/login",
+        json={"email": CONTENT360_EMAIL, "password": CONTENT360_PASSWORD},
+        headers={
+            "X-Requested-With": "XMLHttpRequest",
+            "X-XSRF-TOKEN": get_xsrf_token() or csrf_token,
+            "Accept": "application/json",
+            "Content-Type": "application/json",
+        },
+    )
+    if resp.status_code >= 400:
+        print(f"❌ Session login failed: HTTP {resp.status_code}")
+        return False
+
+    _update_inertia_version(resp)
+    return True
+
+
+# ── Content360 API helpers ──────────────────────────────────────────────────
+def _api_url(path):
+    """Full URL under the workspace API namespace."""
+    return f"{CONTENT360_BASE_URL}/os/api/{CONTENT360_ORG_ID}{path}"
+
+
+def c360_get(path):
+    resp = session.get(_api_url(path), headers=inertia_headers())
+    _update_inertia_version(resp)
+    return resp
+
+
+def c360_post(path, payload=None):
+    headers = inertia_headers({"X-XSRF-TOKEN": get_xsrf_token()})
+    resp = session.post(_api_url(path), headers=headers, json=payload or {})
+    _update_inertia_version(resp)
+    return resp
+
+
+def c360_delete(path):
+    headers = inertia_headers({"X-XSRF-TOKEN": get_xsrf_token()})
+    resp = session.delete(_api_url(path), headers=headers)
+    _update_inertia_version(resp)
+    return resp
+
+
+def get_accounts():
+    """Fetch all connected social accounts for the workspace."""
+    resp = c360_get("/accounts")
+    resp.raise_for_status()
+    data = resp.json()
+    # Inertia responses nest page data under "props"; fall back to flat keys
+    if isinstance(data, dict):
+        return data.get("props", {}).get("accounts") or data.get("accounts") or data.get("data") or []
+    return data
+
+
+def create_post(content_text, account_ids, status="draft", schedule_at=None):
+    """Create a post in Content360. Returns the post dict, or None on failure."""
+    versions = []
+    for i, acc_id in enumerate(account_ids):
+        versions.append({
+            "account_id": int(acc_id),
+            "is_original": i == 0,
+            "content": [{
+                "body": f"<p>{content_text}</p>",
+                "media": [],
+                "url": "",
+                "opened": True
+            }]
+        })
+
+    payload = {
+        "content": content_text,
+        "accounts": account_ids,
+        "status": status,
+        "versions": versions
+    }
+
+    if schedule_at:
+        payload["schedule_at"] = schedule_at
+
+    resp = c360_post("/posts", payload)
+
+    if resp.status_code in (200, 201):
+        return resp.json()
+
+    # Try to parse error
+    try:
+        err = resp.json()
+        print(f"   ❌ API error: {err.get('message', err)}")
+        if err.get("errors"):
+            for field, msgs in err["errors"].items():
+                for msg in msgs:
+                    print(f"      {field}: {msg}")
+    except Exception:
+        print(f"   ❌ HTTP {resp.status_code}: {resp.text[:200]}")
+
+    return None
+
+
+def delete_post(uuid):
+    """Delete a post by UUID."""
+    resp = c360_delete(f"/posts/{uuid}")
+    return resp.status_code in (200, 204)
+
+
+def schedule_post(uuid):
+    """Schedule a queued post."""
+    resp = c360_post(f"/posts/schedule/{uuid}")
+    return resp.status_code == 200
+
+
+# ── Notion helpers ──────────────────────────────────────────────────────────
+def notion_query_database(db_id, filter_props=None):
+    url = f"https://api.notion.com/v1/databases/{db_id}/query"
+    payload = {"page_size": 100}
+    if filter_props:
+        payload["filter"] = filter_props
+    resp = requests.post(
+        url,
+        headers={
+            "Authorization": f"Bearer {NOTION_API_KEY}",
+            "Content-Type": "application/json",
+            "Notion-Version": "2022-06-28",
+        },
+        json=payload,
+    )
+    resp.raise_for_status()
+    return resp.json().get("results", [])
+
+
+def extract_text(prop):
+    if not prop:
+        return ""
+    t = prop.get("type", "")
+    if t == "title":
+        return "".join(x.get("plain_text", "") for x in prop.get("title", []))
+    if t == "rich_text":
+        return "".join(x.get("plain_text", "") for x in prop.get("rich_text", []))
+    if t == "select":
+        # Notion returns null (not {}) for empty selects and dates; guard with `or {}`
+        return (prop.get("select") or {}).get("name", "") or ""
+    if t == "multi_select":
+        return [x.get("name", "") for x in prop.get("multi_select", [])]
+    if t == "date":
+        return (prop.get("date") or {}).get("start", "") or ""
+    if t == "checkbox":
+        return prop.get("checkbox", False)
+    if t == "number":
+        return prop.get("number", 0) or 0
+    return ""
+
+
+def notion_update_page(page_id, properties):
+    url = f"https://api.notion.com/v1/pages/{page_id}"
+    resp = requests.patch(
+        url,
+        headers={
+            "Authorization": f"Bearer {NOTION_API_KEY}",
+            "Content-Type": "application/json",
+            "Notion-Version": "2022-06-28",
+        },
+        json={"properties": properties},
+    )
+    resp.raise_for_status()
+    return resp.json()
+
+
+# ── Sync logic ───────────────────────────────────────────────────────────────
+def sync_notion_to_content360(dry_run=True, platforms=None):
+    print(f"\n{'[DRY RUN] ' if dry_run else ''}Syncing Notion → Content360")
+    print(f"Platforms: {platforms or 'all'}\n")
+
+    # 0. Ensure we have credentials
+    if not CONTENT360_API_KEY:
+        print("❌ CONTENT360_API_KEY not set.")
+        sys.exit(1)
+    if not CONTENT360_ORG_ID:
+        print("❌ CONTENT360_ORG_ID not set.")
+        sys.exit(1)
+
+    # 1. Verify the bearer token; fall back to session login on 401
+    print("Connecting to Content360...")
+    resp = c360_get("/posts")
+    if resp.status_code == 401:
+        print("⚠️ Token expired, re-authing via session login...")
+        _ensure_auth()
+    else:
+        print("✅ Connected with bearer token")
+
+    # 2. Build platform → account ID mapping
+    print("Fetching accounts...")
+    accounts = get_accounts()
+    platform_to_accounts = {}
+    for acc in accounts:
+        provider = acc.get("provider", "").lower()
+        if provider not in platform_to_accounts:
+            platform_to_accounts[provider] = []
+        platform_to_accounts[provider].append({
+            "id": str(acc["id"]),
+            "name": acc.get("name", ""),
+            "provider": provider,
+        })
+
+    print(f"Found {len(accounts)} accounts across {len(platform_to_accounts)} platforms")
+
+    # 3. Fetch unscheduled posts from Notion
+    print("\nFetching unscheduled posts from Notion...")
+    pages = notion_query_database(NOTION_DATABASE_ID)
+    unscheduled = []
+    for page in pages:
+        props = page.get("properties", {})
+        posted = extract_text(props.get("Posted"))
+        if posted:
+            continue
+        schedule_date = extract_text(props.get("Schedule"))
+        platform = extract_text(props.get("Platform"))
+        if platforms and platform.lower() not in [p.lower() for p in platforms]:
+            continue
+        unscheduled.append({
+            "page_id": page["id"],
+            "props": props,
+            "platform": platform,
+            "schedule_date": schedule_date,
+        })
+
+    print(f"Found {len(unscheduled)} unscheduled post(s) in Notion")
+
+    if not unscheduled:
+        print("Nothing to sync.")
+        return []
+
+    # 4. 
Create posts in Content360
+    created = []
+    for item in unscheduled:
+        props = item["props"]
+        page_id = item["page_id"]
+        platform = item["platform"].lower()
+        schedule_date = item["schedule_date"]
+
+        # Get caption/hook/cta
+        caption = extract_text(props.get("Caption", ""))
+        hook = extract_text(props.get("Hook", ""))
+        cta = extract_text(props.get("CTA", ""))
+
+        if hook:
+            full_text = f"{hook}\n\n{caption}"
+        else:
+            full_text = caption
+
+        if cta:
+            full_text += f"\n\n{cta}"
+
+        if not full_text.strip():
+            print(f"   ⚠️ Skipping empty post (page {page_id})")
+            continue
+
+        # Select account
+        accs = platform_to_accounts.get(platform, [])
+        if not accs:
+            print(f"   ⚠️ No {platform} account found, skipping")
+            continue
+
+        account_ids = [accs[0]["id"]]  # Use first account for this platform
+        status = "draft" if not schedule_date else "schedule"
+
+        if dry_run:
+            print(f"   [DRY RUN] Would create post for {platform}: {hook or caption[:60]}...")
+            continue
+
+        print(f"   Creating {platform} post: {hook or caption[:50]}...")
+        # Pass the Notion Schedule date through so "schedule" posts get a time
+        post = create_post(full_text, account_ids, status=status,
+                           schedule_at=schedule_date or None)
+
+        if post:
+            uuid = post.get("uuid", "")
+            print(f"   ✅ Created post UUID: {uuid}")
+
+            # Mark as posted in Notion
+            try:
+                notion_update_page(page_id, {"Posted": {"checkbox": True}})
+                print("   ✅ Marked as Posted in Notion")
+            except Exception as e:
+                print(f"   ⚠️ Could not update Notion: {e}")
+
+            created.append({
+                "page_id": page_id,
+                "platform": platform,
+                "uuid": uuid,
+                "post": post,
+            })
+        else:
+            print("   ❌ Failed to create post")
+
+        time.sleep(0.5)  # Rate limit
+
+    print(f"\n{'[DRY RUN] ' if dry_run else ''}Created {len(created)} post(s) in Content360")
+    return created
+
+
+# ── CLI ───────────────────────────────────────────────────────────────────────
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Sync Notion content calendar to Content360")
+    parser.add_argument("--dry-run", action="store_true", default=False,
+                        help="Show what would be done without making changes")
+    parser.add_argument("--platforms", type=str, default="",
+                        help="Comma-separated list: facebook,linkedin,x,instagram,youtube,tiktok,pinterest,reddit")
+    args = parser.parse_args()
+
+    platforms = [p.strip() for p in args.platforms.split(",") if p.strip()] or None
+
+    if not CONTENT360_API_KEY:
+        print("❌ CONTENT360_API_KEY environment variable not set.")
+        sys.exit(1)
+
+    created = sync_notion_to_content360(dry_run=args.dry_run, platforms=platforms)

From 7dcedd2a04fe2a6213c6b858b041927ea33c040c Mon Sep 17 00:00:00 2001
From: substrate-bot
Date: Wed, 29 Apr 2026 17:52:40 +0000
Subject: [PATCH 2/2] Add huggingface-hub skill: access HF models, datasets,
 spaces via Python

---
 Community/huggingface-hub/DISPLAY.json      |   1 +
 Community/huggingface-hub/SKILL.md          |  68 ++++++++
 Community/huggingface-hub/scripts/hf_hub.py | 168 ++++++++++++++++++++
 external.yml                                |  53 +++---
 4 files changed, 268 insertions(+), 22 deletions(-)
 create mode 100644 Community/huggingface-hub/DISPLAY.json
 create mode 100644 Community/huggingface-hub/SKILL.md
 create mode 100755 Community/huggingface-hub/scripts/hf_hub.py

diff --git a/Community/huggingface-hub/DISPLAY.json b/Community/huggingface-hub/DISPLAY.json
new file mode 100644
index 0000000..ed97fb4
--- /dev/null
+++ b/Community/huggingface-hub/DISPLAY.json
@@ -0,0 +1 @@
+{"icon":"download","tags":["huggingface","models","datasets","ai"]}
diff --git a/Community/huggingface-hub/SKILL.md b/Community/huggingface-hub/SKILL.md
new file mode 100644
index 0000000..c0251b3
--- /dev/null
+++ b/Community/huggingface-hub/SKILL.md
@@ -0,0 +1,68 @@
+---
+name: huggingface-hub
+description: Access Hugging Face Hub models, datasets, and spaces via the huggingface_hub Python library. Use when you need to list, search, download, or upload HF assets.
+compatibility: Created for Zo Computer
+metadata:
+  author: jaknyfe.zo.computer
+---
+
+# Hugging Face Hub Skill
+
+Interact with Hugging Face Hub — browse models/datasets/spaces, download files, upload artifacts, and more.
+
+## Prerequisites
+
+Install the library:
+```bash
+pip install huggingface_hub
+```
+
+Save your HF token to [Settings > Advanced](/?t=settings&s=advanced) as `HF_TOKEN` (get one at https://huggingface.co/settings/tokens).
+
+## Usage
+
+```bash
+python3 /home/workspace/Skills/huggingface-hub/scripts/hf_hub.py <command> [options]
+```
+
+### Commands
+
+| Command | Description |
+|---------|-------------|
+| `list-models` | List models. Options: `--search`, `--sort`, `--direction`, `--limit` |
+| `list-datasets` | List datasets. Options: `--search`, `--sort`, `--direction`, `--limit` |
+| `list-spaces` | List spaces. Options: `--search`, `--sort`, `--direction`, `--limit` |
+| `model-info` | Get model info. Options: `--model` |
+| `dataset-info` | Get dataset info. Options: `--dataset` |
+| `download-file` | Download a file from a repo. Options: `--repo-id`, `--filename`, `--revision` |
+| `upload-file` | Upload a file. Options: `--local-path`, `--repo-id`, `--repo-type`, `--path-in-repo` |
+| `whoami` | Show authenticated user info |
+
+### Examples
+
+```bash
+# Search for text-to-image models
+python3 .../hf_hub.py list-models --search "text-to-image" --limit 10
+
+# Download a model file
+python3 .../hf_hub.py download-file --repo-id "stabilityai/stable-diffusion-xl-base-1.0" --filename "pytorch_model.bin"
+
+# Upload a file
+python3 .../hf_hub.py upload-file --local-path ./model.pt --repo-id "your-username/my-model" --repo-type "model"
+
+# List top-rated datasets
+python3 .../hf_hub.py list-datasets --sort "likes" --direction desc --limit 5
+```
+
+## API Reference
+
+This skill wraps `huggingface_hub`. Key entry points:
+- `list_models`, `list_datasets`, `list_spaces` — browse hub
+- `HfApi`, `hf_hub_download` — authentication and file ops
+- `InferenceClient` — run inference on hosted models/spaces
+
+## Notes
+
+- Downloads cache to `~/.cache/huggingface/`
+- `repo-type` values: `model`, `dataset`, `space`
+- Default revision is `main`
diff --git a/Community/huggingface-hub/scripts/hf_hub.py b/Community/huggingface-hub/scripts/hf_hub.py
new file mode 100755
index 0000000..e8c9142
--- /dev/null
+++ b/Community/huggingface-hub/scripts/hf_hub.py
@@ -0,0 +1,168 @@
+#!/usr/bin/env python3
+"""Hugging Face Hub CLI tool. Usage: python3 hf_hub.py <command> [options]"""
+
+import argparse
+import os
+import sys
+from pathlib import Path
+
+try:
+    from huggingface_hub import (
+        HfApi, hf_hub_download,
+        list_models, list_datasets, list_spaces,
+        model_info, dataset_info,
+    )
+except ImportError:
+    sys.stderr.write("ERROR: huggingface_hub not installed. Run: pip install huggingface_hub\n")
+    sys.exit(1)
+
+
+def get_token() -> str:
+    token = os.environ.get("HF_TOKEN", "")
+    if not token:
+        try:
+            with open("/etc/secrets/HF_TOKEN", "r") as f:
+                token = f.read().strip()
+        except Exception:
+            pass
+    return token
+
+
+def get_api() -> HfApi:
+    token = get_token()
+    return HfApi(token=token if token else None)
+
+
+def cmd_whoami(args):
+    api = get_api()
+    info = api.whoami()
+    print(f"Username: {info['name']}")
+    print(f"Full name: {info.get('fullname', 'N/A')}")
+    print(f"Email: {info.get('email', 'N/A')}")
+    print(f"Organizations: {', '.join(o.get('name', '') for o in info.get('orgs', []))}")  # orgs entries are dicts
+
+
+def cmd_list_models(args):
+    kwargs = {"sort": args.sort, "direction": args.direction, "limit": args.limit}
+    if args.search:
+        kwargs["search"] = args.search
+    kwargs = {k: v for k, v in kwargs.items() if v is not None}
+    for m in list_models(**kwargs):
+        print(f"{m.id} likes={m.likes}")
+
+
+def cmd_list_datasets(args):
+    kwargs = {"sort": args.sort, "direction": args.direction, "limit": args.limit}
+    if args.search:
+        kwargs["search"] = args.search
+    kwargs = {k: v for k, v in kwargs.items() if v is not None}
+    for d in list_datasets(**kwargs):
+        print(f"{d.id} likes={d.likes}")
+
+
+def cmd_list_spaces(args):
+    kwargs = {"sort": args.sort, "direction": args.direction, "limit": args.limit}
+    if args.search:
+        kwargs["search"] = args.search
+    kwargs = {k: v for k, v in kwargs.items() if v is not None}
+    for s in list_spaces(**kwargs):
+        print(f"{s.id} likes={s.likes}")
+
+
+def cmd_model_info(args):
+    info = model_info(args.model)
+    print(f"Model ID: {info.id}")
+    print(f"Downloads: {getattr(info, 'downloads', 'N/A')}")
+    print(f"Likes: {info.likes}")
+    print(f"Tags: {', '.join(info.tags)}")
+    print(f"Pipeline tag: {getattr(info, 'pipeline_tag', 'N/A')}")
+    print(f"Created at: {info.created_at}")
+    print(f"Last modified: {info.last_modified}")
+
+
+def cmd_dataset_info(args):
+    info = dataset_info(args.dataset)
+    print(f"Dataset ID: {info.id}")
+    print(f"Downloads: {getattr(info, 'downloads', 'N/A')}")
+    print(f"Likes: {info.likes}")
+    print(f"Tags: {', '.join(info.tags)}")
+    print(f"Created at: {info.created_at}")
+    print(f"Last modified: {info.last_modified}")
+
+
+def cmd_download_file(args):
+    path = hf_hub_download(
+        repo_id=args.repo_id,
+        filename=args.filename,
+        revision=args.revision or "main",
+        token=get_token() or None,
+    )
+    print(path)
+
+
+def cmd_upload_file(args):
+    api = get_api()
+    api.upload_file(
+        path_or_fileobj=args.local_path,
+        path_in_repo=args.path_in_repo or Path(args.local_path).name,
+        repo_id=args.repo_id,
+        repo_type=args.repo_type or "model",
+        token=get_token() or None,
+    )
+    print(f"Uploaded {args.local_path} to {args.repo_id}/{args.path_in_repo or Path(args.local_path).name}")
+
+
+def build_parser():
+    parser = argparse.ArgumentParser(description="Hugging Face Hub CLI")
+    sub = parser.add_subparsers(dest="command")
+    sub.add_parser("whoami", help="Show authenticated user info")
+    p = sub.add_parser("list-models", help="List models")
+    p.add_argument("--search")
+    p.add_argument("--sort", default=None)
+    p.add_argument("--direction", default=None)
+    p.add_argument("--limit", type=int, default=20)
+    p = sub.add_parser("list-datasets", help="List datasets")
+    p.add_argument("--search")
+    p.add_argument("--sort", default=None)
+    p.add_argument("--direction", default=None)
+    p.add_argument("--limit", type=int, default=20)
+    p = sub.add_parser("list-spaces", help="List spaces")
+    p.add_argument("--search")
+    p.add_argument("--sort", default=None)
+    
p.add_argument("--direction", default=None) + p.add_argument("--limit", type=int, default=20) + p = sub.add_parser("model-info", help="Get model info") + p.add_argument("--model", required=True) + p = sub.add_parser("dataset-info", help="Get dataset info") + p.add_argument("--dataset", required=True) + p = sub.add_parser("download-file", help="Download a file from a repo") + p.add_argument("--repo-id", required=True) + p.add_argument("--filename", required=True) + p.add_argument("--revision", default=None) + p = sub.add_parser("upload-file", help="Upload a file") + p.add_argument("--local-path", required=True) + p.add_argument("--repo-id", required=True) + p.add_argument("--repo-type", default="model") + p.add_argument("--path-in-repo", default=None) + return parser + + +COMMANDS = { + "whoami": cmd_whoami, + "list-models": cmd_list_models, + "list-datasets": cmd_list_datasets, + "list-spaces": cmd_list_spaces, + "model-info": cmd_model_info, + "dataset-info": cmd_dataset_info, + "download-file": cmd_download_file, + "upload-file": cmd_upload_file, +} + + +if __name__ == "__main__": + parser = build_parser() + args = parser.parse_args() + if args.command is None: + parser.print_help() + sys.exit(1) + COMMANDS[args.command](args) diff --git a/external.yml b/external.yml index cf24dc6..d2b2c52 100644 --- a/external.yml +++ b/external.yml @@ -3,21 +3,16 @@ description: Post, reply, and engage on Moltbook, a social network for AI agents. - repository: clawdbot/clawdbot skill: gog - description: Google Workspace CLI for Gmail, Calendar, Drive, Contacts, Sheets, and Docs + description: Google Workspace CLI for Gmail, Calendar, Drive, Contacts, Sheets, + and Docs compatibility: Requires zo-google-direct-oauth skill with valid tokens at /home/.z/google-oauth/ - notice: | - This skill can reuse OAuth credentials from `zo-google-direct-oauth` at `/home/.z/google-oauth/`. - Use these steps to avoid creating a second OAuth app: - - ### Setup (reuse existing OAuth app) - 1. Ensure `zo-google-direct-oauth` is already set up. - 2. Run: - - `gog auth credentials /home/.z/google-oauth/client_secret.json` - - `gog auth add you@gmail.com --services gmail,calendar,drive,contacts,docs,sheets` - 3. Verify: - - `gog auth list` - - IMPORTANT: If you generate a new OAuth app here, it may replace or invalidate existing tokens. + notice: "This skill can reuse OAuth credentials from `zo-google-direct-oauth` at\ + \ `/home/.z/google-oauth/`.\nUse these steps to avoid creating a second OAuth\ + \ app:\n\n### Setup (reuse existing OAuth app)\n1. Ensure `zo-google-direct-oauth`\ + \ is already set up.\n2. Run:\n - `gog auth credentials /home/.z/google-oauth/client_secret.json`\n\ + \ - `gog auth add you@gmail.com --services gmail,calendar,drive,contacts,docs,sheets`\n\ + 3. Verify:\n - `gog auth list`\n\nIMPORTANT: If you generate a new OAuth app\ + \ here, it may replace or invalidate existing tokens.\n" - repository: clawdbot/clawdbot skill: trello description: Work with Trello using your own Trello API key (free) @@ -30,9 +25,13 @@ description: Work with Notion using your own Notion API key (free) - repository: clawdbot/clawdbot skill: tmux - notice: | - Zo note: the docs mention Clawdbot-specific socket env vars. On Zo, you can still set - `CLAWDBOT_TMUX_SOCKET_DIR` to any writable path (e.g. `/tmp/zo-tmux-sockets`) or omit it. + notice: 'Zo note: the docs mention Clawdbot-specific socket env vars. On Zo, you + can still set + + `CLAWDBOT_TMUX_SOCKET_DIR` to any writable path (e.g. 
`/tmp/zo-tmux-sockets`) + or omit it. + + ' - repository: clawdbot/clawdbot skill: weather description: Get the current weather for any location @@ -46,13 +45,18 @@ skill: video-frames - repository: clawdbot/skills skill: lastfm - notice: | - Zo note: set `LASTFM_API_KEY` and `LASTFM_USER` in your shell (or source a local env file) - before using this skill. The `~/.clawdbot/.env` path in the docs is not required on Zo. + notice: 'Zo note: set `LASTFM_API_KEY` and `LASTFM_USER` in your shell (or source + a local env file) + + before using this skill. The `~/.clawdbot/.env` path in the docs is not required + on Zo. + + ' - repository: clawdbot/skills skill: shorten - notice: | - On Zo, the script lives at `/home/workspace/Skills/shorten/shorten.sh`. + notice: 'On Zo, the script lives at `/home/workspace/Skills/shorten/shorten.sh`. + + ' - repository: clawdbot/skills skill: just-fucking-cancel - repository: adithya-s-k/manim_skill @@ -165,3 +169,8 @@ skill: signup-flow-cro - repository: coreyhaines31/marketingskills skill: social-content +- repository: Noesis-Boss/skills-huggingface-hub + skill: huggingface-hub + description: Access Hugging Face Hub models, datasets, and spaces via the huggingface_hub + Python library. Use when you need to list, search, download, or upload HF assets. +