Compare commits

25 Commits
v0.0.1 ... main

Author SHA1 Message Date
5e7de2f62c Update README.md 2025-08-12 01:33:30 +00:00
d473b05d89 Restore README.md from commit 2c08fce 2025-08-11 18:18:03 -07:00
3eeaa32759 remove accidental wiki sub-module; keep llgit wiki standalone 2025-08-11 18:12:19 -07:00
01ac83a342 Restore README.md to version 2c08fce88f 2025-08-11 18:07:04 -07:00
75055dca56 Update README.md 2025-08-12 00:54:14 +00:00
90c5d6992e Update README.md 2025-08-12 00:48:01 +00:00
14500c479e Merge branch 'main' of https://github.com/LCS-Gramps/video-pipeline 2025-08-05 22:24:16 -07:00
41f0ddb0c1 🧹 Removed legacy wiki subfolder after migration to GitHub-hosted wiki 2025-08-05 22:23:27 -07:00
2c08fce88f Update README.md 2025-08-05 22:17:31 -07:00
73666d3987 🪂 Deprecate sync_wiki.py and transition to in-memory GitHub wiki publishing
sync_wiki.py is now deprecated in favor of token-authenticated in-memory
publishing directly to the GitHub wiki. This change reflects a shift to
e2e automation. The file remains as a fallback parachute.

Also includes minor updates to .gitignore and wiki repo pointer.
2025-08-04 20:22:07 -07:00
ed85fba609 📚 Re-added wiki as a proper Git submodule 2025-08-04 18:39:29 -07:00
0a7387447c Update yt_poster.py: [describe your changes briefly] 2025-07-28 18:18:00 -07:00
96ca63c299 🔧 Sanity check: test commit from assistant 2025-07-28 17:31:20 -07:00
087e96b7e6 📚 Update Home.md with project overview and index 2025-07-27 20:34:02 -07:00
ce19fce7f7 🧪 Test: trigger wiki sync hook from post-commit.bat 2025-07-27 20:17:16 -07:00
e9fc694970 🧠 Metadata finalization: integrated clip/session merge, persistent archive, and sequential title suffixing
- Merged clip-level and session-level metadata into a unified object
- Stored notes.json inline and as a child field for structured access
- Implemented local NoSQL-style history archive for uploaded videos
- Added YouTube/PeerTube URL arrays to metadata post-upload
- Ensured sequential titling for multiple sessions on the same day
- Removed source folder after upload when DEBUG == False
2025-07-27 20:15:09 -07:00
ea62fa34d0 Finalized YouTube upload flow: metadata archiving, title suffixing, and cleanup
- Integrated save_metadata_record() after successful upload
- Titles now reflect sequential numbering for same-day sessions
- Automatically uploads thumbnail for widescreen videos
- Removes session folder after upload if DEBUG == False
- Merges session + clip metadata for persistence
- Handles fallback logic for missing notes
2025-07-26 09:06:31 -07:00
652061b914 🧠 Integrate notes.txt for dynamic thumbnail prompt generation
- Added generate_thumbnail_prompt(notes) to thumbnail_utils.py
- yt_poster.py now reads notes.txt (if present) from the clip directory
- Uses content to generate brand-aligned OpenAI prompt for future AI thumbnail generation
- Fallback: generic description if notes.txt is missing
- Prompt is printed for now; image generation hook is pending
2025-07-25 19:37:19 -07:00
e4d4a6c15f 🧠 Integrate notes.txt for dynamic thumbnail prompt generation
- Added generate_thumbnail_prompt(notes) to thumbnail_utils.py
- yt_poster.py now reads notes.txt (if present) from the clip directory
- Uses content to generate brand-aligned OpenAI prompt for future AI thumbnail generation
- Fallback: generic description if notes.txt is missing
- Prompt is printed for now; image generation hook is pending
2025-07-25 19:32:11 -07:00
22a51dc7ae 🐛 Fix YouTube thumbnail upload ordering
- Moved generate_thumbnail() and thumbnails().set() to run after video upload
- Now waits for video_id before attempting thumbnail upload
- Applies only to widescreen videos (Shorts still skip thumbnails)
2025-07-25 19:15:49 -07:00
0c8e8f2661 🖼️ Add thumbnail generation to YouTube upload (widescreen only)
- Integrated generate_thumbnail() into upload_video()
- Attempted to upload custom thumbnail via thumbnails().set()
- Applies only to widescreen videos (not vertical Shorts)
- NOTE: Bug present — tries to upload thumbnail before video_id is available
- Will fix by reordering after upload in next commit
2025-07-25 19:13:47 -07:00
961c43fbd5 🛠️ Initial YouTube upload and description generation: OAuth, montage flow, thumbnail setup, env config. Work in progress by gramps@llamachile.shop 2025-07-24 19:34:47 -07:00
2b8bf4ce19 Create FUNDING.yml 2025-07-23 20:54:39 -07:00
2f6740eb54 Initial YouTube description generation and authentication — work in progress 2025-07-23 20:28:20 -07:00
6c5850b1aa 🔧 v0.1.1 prep: staging upload + OpenAI integration code 2025-07-23 18:29:47 -07:00
24 changed files with 821 additions and 112 deletions

15
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,15 @@
# These are supported funding model platforms
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: llamachileshop
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
thanks_dev: # Replace with a single thanks.dev username
merch: 'https://llamachile.support'

65
.gitignore vendored

@@ -1,44 +1,65 @@
# Byte-compiled / cache
# Python
__pycache__/
*.py[cod]
*.pyo
*.pyd
*.so
# Virtual environment
# Virtual environments
.venv/
env/
venv/
ENV/
# VS Code settings
# test data
2025.06.20/
# VSCode
.vscode/
# OS files
.DS_Store
Thumbs.db
# Tokens and API keys
# Environment variables and secrets
.env
client_secrets.json
token.pickle
token.zip
token (2).zip
# Build artifacts
*.mp4
*.mov
*.mp3
*.zip
*.odt
description_gen.py
# Logs
logs/
*.log
logs/
# Assets not for versioning
assets/*.mp4
assets/*.mp3
assets/*.png
assets/*.otf
# Jupyter Notebook checkpoints
.ipynb_checkpoints/
# Processed data
202*/**/rendered/
202*/**/*.mp4
# Compiled C extensions
*.c
*.o
*.obj
*.dll
*.a
*.lib
*.exp
*.pdb
# Test and coverage
.coverage
.tox/
.nox/
.cache/
pytest_cache/
htmlcov/
# Distribution / packaging
build/
dist/
*.egg-info/
.eggs/
MANIFEST
# Misc
*.bak
*.swp
*.tmp

3
.gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "video-pipeline.wiki"]
path = video-pipeline.wiki
url = https://github.com/LCS-Gramps/video-pipeline.wiki.git

README.md

@@ -1,19 +1,83 @@
-# Llama Chile Shop Video Automation Pipeline
+# 🎥 LCS Pipeline
-This project automates the rendering, branding, and publishing of Fortnite gameplay clips for YouTube and PeerTube.
+Automated livestream highlight rendering and publishing for Fortnite content featuring Gramps.
+
+This project powers the backend of [Llama Chile Shop](https://www.youtube.com/@llamachileshop), transforming raw livestream clips into polished, uploaded videos — complete with titles, thumbnails, intros/outros, and social metadata.
+
+---
-## Features
+## ⚙️ Features
-- Auto-detection of new stream folders
-- Dynamic title card overlay
-- Automated rendering and social post generation
-- Vertical & widescreen output
+* ✅ Daily folder scan for new stream sessions (2025-07-10) \[`v0.1.0`]
+* 📂 Clip classification (`hits/`, `misses/`, `montages/`, `outtakes/`, `timelapses/`) (2025-08-07) \[`v0.1.2`]
+* 🧠 AI-generated titles and descriptions via OpenAI (2025-07-10) \[`v0.1.0`]
+* 🎬 Auto-stitched intro + title card + outro (2025-07-23) \[`v0.1.0`]
+* 🖼️ Dynamic thumbnail creation with Fortnite styling (2025-07-25) \[`v0.1.0`]
+* ⬆️ Uploads to YouTube (2025-07-29) and PeerTube (2025-08-07) \[`v0.1.1` & `v0.1.2`]
+* 📜 Metadata archive and session history (2025-07-26) \[`v0.1.0`]
+* 🐘 (Planned) Social posts to Mastodon and Bluesky (2025-07-20) \[`v0.2.0`]
+
+---
-## Setup
+## 🚀 Quick Start
-1. Clone the repo.
-2. Create a `.env` file (see `ENVIRONMENT.md` for required keys).
-3. Install dependencies:
+```bash
+git clone https://llgit.llamachile.tube/gramps/video-pipeline.git
+cd video-pipeline
+pip install -r requirements.txt
+cp .env.example .env   # Fill in your API keys and config
+python main.py
+```
+
+> Requires Python 3.13+ and access to the mapped NAS directory (e.g., `Z:\2025.08.05\hits\`).
+
+---
+
+## 📁 Folder Structure
+
+```
+video-pipeline/
+├── main.py
+├── config.py
+├── .env.example
+├── modules/
+│   ├── render_engine.py
+│   ├── title_utils.py
+│   ├── thumbnail_utils.py
+│   ├── yt_poster.py
+│   └── ...
+├── assets/       # Branding assets (intros, fonts, logos)
+├── logs/         # Sync logs, wiki publish logs, etc.
+└── metadata/
+    └── history/  # Per-clip metadata archive
+```
+
+---
+
+## 📚 Documentation
+
+Full documentation is hosted in the [📖 Gitea Wiki](https://llgit.llamachile.tube/gramps/video-pipeline/wiki).
+
+Recommended pages:
+
+* 🏠 [Home](https://llgit.llamachile.tube/gramps/video-pipeline/wiki)
+* 🎯 [Clip Handling Logic](https://llgit.llamachile.tube/gramps/video-pipeline/wiki/Clip-Handling-Logic)
+* 🗃️ [Metadata Extraction](https://llgit.llamachile.tube/gramps/video-pipeline/wiki/Metadata-Extraction)
+* 📺 [YouTube Upload Logic](https://llgit.llamachile.tube/gramps/video-pipeline/wiki/YouTube-Upload-Logic)
+
+---
+
+## 🛠️ Development Mode
+
+* `DEBUG_MODE=true` in `.env` disables destructive operations
+* All modules can be run/tested independently
+* Wiki editing is supported via local Markdown and `wiki_publish.log`
+
+---
+
+## 👤 About
+
+Created by Gramps for Llama Chile Shop — a custom content pipeline for old-school gaming chaos.
+
+> Maintainer: `gramps@llamachile.shop`
+> Contributions welcome in the form of bug reports, pull requests, or Fortnite gifts.
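
A minimal `.env` sketch for the Quick Start above. Only two keys are confirmed by the code in this compare, `DEBUG_MODE` (read by `modules/config.py`) and `OPENAI_API_KEY` (read by `modules/description_utils.py`); `ENVIRONMENT.md` lists the full set:

```
# .env (sketch; only keys confirmed by code in this compare)
DEBUG_MODE=true          # read by modules/config.py; disables destructive operations
OPENAI_API_KEY=sk-...    # read by modules/description_utils.py
```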

Binary file not shown.

authorize_youtube.py

@@ -1,25 +1,61 @@
+"""
+authorize_youtube.py
+
+Handles OAuth2 authorization for the YouTube Data API.
+
+This module loads the client_secrets.json file and generates an authorized
+YouTube API service object for use by other modules. The token is cached
+in token.pickle to avoid repeated authorization.
+
+Author: gramps@llamachile.shop
+"""
 import os
-import sys
+import pickle
+from pathlib import Path
-# Automatically locate this file's directory (e.g., \\chong\LCS\Videos\eklipse)
-project_root = os.path.dirname(os.path.abspath(__file__))
-modules_dir = os.path.join(project_root, "modules")
+from google.auth.transport.requests import Request
+from google_auth_oauthlib.flow import InstalledAppFlow
+from googleapiclient.discovery import build
-# Add modules directory to the Python path
-sys.path.insert(0, modules_dir)
+# Scopes define what access is requested from the YouTube API
+SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]
-# Change working directory so relative paths (like client_secrets.json) resolve
-os.chdir(modules_dir)
+# Default token and client secret filenames
+TOKEN_PATH = "token.pickle"
+CLIENT_SECRET_FILE = "client_secrets.json"
-# Import from yt_poster in modules
-from yt_poster import authenticate_youtube
-# Run the OAuth flow
+def get_authenticated_service():
+    """
+    Returns an authorized YouTube API client.
+
+    If the token does not exist or is expired, initiates the OAuth flow.
+    Requires client_secrets.json in project root.
+
+    Returns:
+        googleapiclient.discovery.Resource: Authenticated YouTube service
+    """
+    creds = None
+
+    # Check if token.pickle exists
+    if Path(TOKEN_PATH).exists():
+        with open(TOKEN_PATH, "rb") as token:
+            creds = pickle.load(token)
+
+    # If no valid creds, go through OAuth flow
+    if not creds or not creds.valid:
+        if creds and creds.expired and creds.refresh_token:
+            creds.refresh(Request())
+        else:
+            print("🔐 Starting YouTube OAuth authorization...")
+            if not Path(CLIENT_SECRET_FILE).exists():
+                raise FileNotFoundError(f"Missing required file: {CLIENT_SECRET_FILE}")
+            flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRET_FILE, SCOPES)
+            creds = flow.run_local_server(port=0)
+
+        # Save the credentials for future use
+        with open(TOKEN_PATH, "wb") as token:
+            pickle.dump(creds, token)
-try:
-    service = authenticate_youtube()
-    print("✅ YouTube authorization complete.")
-except Exception as e:
-    print(f"❌ Authorization failed: {e}")
+
+    return build("youtube", "v3", credentials=creds)
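
A minimal smoke test for the new helper; a sketch assuming `client_secrets.json` sits in the project root (the first run opens a browser for consent, later runs reuse `token.pickle`):

```python
from authorize_youtube import get_authenticated_service

youtube = get_authenticated_service()  # triggers the OAuth flow only if needed
print(type(youtube).__name__)  # a googleapiclient Resource, ready for videos().insert(...)
```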

3
docs/wiki/Home.md Normal file

@@ -0,0 +1,3 @@
# Home
_TODO: Add content here._

2
docs/wiki/TestSync.md Normal file

@@ -0,0 +1,2 @@
Testing wiki sync trigger @ 07/27/2025 20:16:57

modules/config.py

@@ -5,7 +5,8 @@ from dotenv import load_dotenv
 load_dotenv()

 # debugging flag
-DEBUG = True
+DEBUG = os.getenv("DEBUG_MODE", "false").lower() == "true"

 # 🔧 Project Root
 PROJECT_ROOT = Path(__file__).resolve().parent.parent

modules/description_utils.py

@@ -0,0 +1,67 @@
"""
description_utils.py

Utility functions for generating video descriptions dynamically using OpenAI's API.
Includes brand-aware humor, format-aware descriptions, and dynamic prompt generation.

This module currently supports:
- Montage descriptions (fun, quirky, "Cool-Hand Gramps" themed)

Author: Llama Chile Shop
Created: 2025-07-22
"""

import os
import random

import openai

# 🛠 Global debug flag (imported by design elsewhere)
from modules.config import DEBUG

# Set up OpenAI API key from environment
openai.api_key = os.getenv("OPENAI_API_KEY")


def generate_montage_description() -> str:
    """
    Generates a creative, humorous description for a montage highlight video.

    Leverages the "Cool-Hand Gramps" branding identity and inserts dynamic randomness
    to keep each description fresh and engaging.

    Returns:
        str: A YouTube/PeerTube-ready video description.
    """
    # 🎲 Add entropy to reduce prompt caching / same-seed behavior
    creativity_seed = random.randint(0, 999999)

    # 🧠 Base template for the prompt
    prompt = f"""
You are a branding-savvy copywriter helping a YouTube gaming channel called "Llama Chile Shop"
run by a quirky and beloved senior gamer named "Gramps." Gramps is known for his calm demeanor,
sharp shooting, and whacky senile playstyle in Solo Zero Build Fortnite matches. His fans refer
to him as "Cool-Hand Gramps" because his heart rate doesn't rise, even in intense firefights.

Write a YouTube/PeerTube video description for a highlight montage from one of Gramps' livestreams.
Make it short, funny, and on-brand. Include emoticons and hashtags. Add a sentence encouraging viewers
to subscribe and check out the stream calendar.

Entropy seed: {creativity_seed}
"""

    try:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "You are a creative and humorous copywriter."},
                {"role": "user", "content": prompt}
            ],
            temperature=0.9,
            max_tokens=250
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        fallback = "Join Gramps for another action-packed Fortnite montage! Subscribe and watch live ➡ https://youtube.com/@llamachileshop 🎮🦙 #Fortnite #CoolHandGramps"
        if DEBUG:
            print(f"[ERROR] Failed to generate montage description: {e}")
        return fallback
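
A usage sketch, assuming `OPENAI_API_KEY` is set in the environment; note that on any API failure the call degrades to the canned fallback string rather than raising:

```python
from modules.description_utils import generate_montage_description

# Each call embeds a fresh entropy seed, so repeated calls produce varied copy.
description = generate_montage_description()
print(description)
```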

121
modules/metadata_utils.py Normal file

@@ -0,0 +1,121 @@
"""
metadata_utils.py

Handles metadata extraction from video clip structure and notes.json,
and manages persistent storage of finalized metadata records.

Author: Llama Chile Shop
"""

import json
import re
from pathlib import Path

from modules.config import NAS_MOUNT_ROOT

# Define where to persist finalized metadata records after upload
HISTORY_DIR = Path("Z:/LCS/Logs/processed")


def derive_session_metadata(session_dir: Path) -> dict:
    """
    Derives session-level metadata from a session directory.

    Includes shared attributes, notes.json contents, and clip metadata for all videos found.

    Args:
        session_dir (Path): Path to the session folder (e.g., 2025.07.24 or 2025.07.24.2)

    Returns:
        dict: A dictionary representing session metadata, including notes and per-clip info.
    """
    session_dir = Path(session_dir)
    session_name = session_dir.name

    # Validate session folder format: YYYY.MM.DD or YYYY.MM.DD.N
    # (anchored with $ so folder names with trailing junk are rejected)
    match = re.match(r"(\d{4})\.(\d{2})\.(\d{2})(?:\.(\d+))?$", session_name)
    if not match:
        raise ValueError(f"Invalid session folder format: {session_name}")

    year, month, day, session_index = match.groups()
    session_date = f"{year}-{month}-{day}"
    session_number = int(session_index) if session_index else 1

    # Attempt to load notes.json from the session root
    notes_path = session_dir / "notes.json"
    notes_data = {}
    if notes_path.exists():
        try:
            with open(notes_path, "r", encoding="utf-8") as f:
                notes_data = json.load(f)
        except Exception as e:
            raise RuntimeError(f"Failed to parse notes.json: {e}")

    # Extract shared fields (with fallback defaults)
    session_meta = {
        "session_date": session_date,
        "session_number": session_number,
        "highlight": notes_data.get("highlight", "Fortnite highlight moment"),
        "tags": notes_data.get("tags", []),
        "gag_name": notes_data.get("gag_name", None),
        "notes": notes_data,
        "clips": []
    }

    # Scan for all .mp4 clips within expected subdirectories
    for subfolder in ["hits", "misses", "montages", "outtakes"]:
        clip_dir = session_dir / subfolder
        if not clip_dir.exists():
            continue

        for clip_path in clip_dir.glob("*.mp4"):
            stem = clip_path.stem.lower()
            is_vertical = stem.endswith("-vert") or stem.endswith("-vertical")
            clip_format = "vertical" if is_vertical else "wide"  # avoid shadowing the format() builtin

            clip_meta = {
                "path": str(clip_path),
                "filename": clip_path.name,
                "stem": clip_path.stem,
                "format": clip_format,
                "clip_type": subfolder,
                "youtube_urls": [],
                "peertube_urls": []
            }
            session_meta["clips"].append(clip_meta)

    return session_meta


def save_metadata_record(metadata: dict) -> None:
    """
    Saves a finalized metadata record to disk for future lookup or audit.

    This includes all session-level and clip-level data, plus any added URLs
    after upload to YouTube or PeerTube.

    Args:
        metadata (dict): Fully populated metadata record, typically post-upload.

    Raises:
        RuntimeError: If required fields are missing or write fails.
    """
    try:
        session_date = metadata.get("session_date")
        filename = metadata.get("filename") or metadata.get("stem")
        if not session_date or not filename:
            raise ValueError("Metadata missing required fields: session_date or filename/stem")

        # Use YYYY.MM.DD folder for archival
        dest_dir = HISTORY_DIR / session_date.replace("-", ".")
        dest_dir.mkdir(parents=True, exist_ok=True)

        # Save as <stem>.json
        dest_file = dest_dir / f"{Path(filename).stem}.json"
        with open(dest_file, "w", encoding="utf-8") as f:
            json.dump(metadata, f, indent=2)

        print(f"📁 Saved metadata record to: {dest_file}")

    except Exception as e:
        raise RuntimeError(f"Failed to save metadata record: {e}")

modules/thumbnail_utils.py

@@ -0,0 +1,64 @@
import subprocess
import os
from pathlib import Path


def generate_thumbnail(video_path: str, output_path: str) -> str:
    """
    Generate a thumbnail image from a representative frame of the given video.

    Parameters:
        video_path (str): Path to the input video file.
        output_path (str): Path where the thumbnail image (JPEG) should be saved.

    Returns:
        str: Path to the generated thumbnail image.

    Notes:
        - Uses FFmpeg's 'thumbnail' filter, which selects a representative frame.
        - Thumbnail will be scaled to 1280x720 resolution (16:9).
        - Overwrites the output file if it already exists.
    """
    video_path = Path(video_path)
    output_path = Path(output_path)

    if not video_path.exists():
        raise FileNotFoundError(f"Video file not found: {video_path}")

    output_path.parent.mkdir(parents=True, exist_ok=True)

    cmd = [
        "ffmpeg", "-y",  # Overwrite output if exists
        "-i", str(video_path),
        "-vf", "thumbnail,scale=1280:720",
        "-frames:v", "1",
        str(output_path)
    ]

    try:
        subprocess.run(cmd, check=True)
    except subprocess.CalledProcessError as e:
        raise RuntimeError(f"Failed to generate thumbnail: {e}") from e

    if not output_path.exists():
        raise RuntimeError(f"Thumbnail was not created: {output_path}")

    return str(output_path)


def generate_thumbnail_prompt(notes: str) -> str:
    """
    Generate a rich thumbnail prompt from a descriptive sentence.

    Args:
        notes (str): A brief sentence describing the video content.

    Returns:
        str: A thumbnail generation prompt for OpenAI or DALL·E.
    """
    return (
        f"Create a Fortnite-style gaming thumbnail based on the moment: \"{notes.strip()}\" "
        f"featuring a stylized llama character with bold comic-style colors. Include dramatic or humorous elements "
        f"(e.g., explosions, dance emotes, intense lighting), and text like 'HIGHLIGHT' or 'VICTORY ROYALE'. "
        f"Use the Llama Chile Shop color palette (f7338f, 10abba, 1c0c38). The vibe should be fun, exaggerated, "
        f"and chill — inviting viewers to laugh and enjoy the moment."
    )
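
A usage sketch with hypothetical paths; requires `ffmpeg` on the PATH:

```python
from modules.thumbnail_utils import generate_thumbnail, generate_thumbnail_prompt

# Hypothetical session paths under the NAS mount.
thumb = generate_thumbnail(
    "Z:/2025.07.25/hits/clutch.mp4",
    "Z:/2025.07.25/hits/clutch_thumb.jpg",
)
print(thumb)

# Prompt text for the planned AI thumbnail pass (printed only; no image call yet).
print(generate_thumbnail_prompt("Gramps clutches a 1v3 with 10 HP left"))
```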

modules/title_utils.py

@@ -111,5 +111,5 @@ def generate_montage_title(session_name: str) -> str:
     parts = session_name.split(".")
     year, month, day = map(int, parts[:3])
     suffix = f" Video {parts[3]}" if len(parts) > 3 else ""
-    date_str = datetime(year, month, day).strftime("%B %-d, %Y")
+    date_str = datetime(year, month, day).strftime("%B %d, %Y").replace(" 0", " ")
     return f"#Fortnite #Solo #Zerobuild #Highlights with Gramps from {date_str}{suffix}"

modules/yt_poster.py

@@ -1,82 +1,84 @@
-import os
-import pickle, logging
-from pathlib import Path
-from datetime import datetime
+#!/usr/bin/env python3
+"""
+yt_poster.py
+
+This module handles the upload of videos to YouTube using the YouTube Data API v3.
+It supports setting metadata such as title, description, tags, category, and privacy settings.
+It also ensures that the game title "Fortnite" is included in the metadata to trigger proper categorization.
+
+Author: gramps@llamachile.shop
+"""
-from google_auth_oauthlib.flow import InstalledAppFlow
-from google.auth.transport.requests import Request
+import os
+import google.auth
 from googleapiclient.discovery import build
 from googleapiclient.http import MediaFileUpload
-from modules.title_utils import get_output_filename, generate_montage_title
+from modules.config import OPENAI_API_KEY, DEBUG
+from modules.archive import save_metadata_record
-# Define OAuth scopes and token paths
-SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]
-TOKEN_PATH = Path("token.pickle")
-CLIENT_SECRETS_FILE = Path("client_secrets.json")
+# Category ID for "Gaming" on YouTube (required for accurate categorization)
+CATEGORY_ID = "20"
+
+# Default tags to include if none are provided
+DEFAULT_TAGS = [
+    "Fortnite", "Zero Build", "Gramps", "CoolHandGramps",
+    "funny", "gaming", "highlights"
+]
+
+# Default visibility setting
+DEFAULT_PRIVACY = "public"
-def authenticate_youtube():
-    """Handles YouTube OAuth flow and returns a service client."""
-    creds = None
-    if TOKEN_PATH.exists():
-        with open(TOKEN_PATH, "rb") as token_file:
-            creds = pickle.load(token_file)
-    if not creds or not creds.valid:
-        if creds and creds.expired and creds.refresh_token:
-            creds.refresh(Request())
-        else:
-            if not CLIENT_SECRETS_FILE.exists():
-                raise FileNotFoundError("client_secrets.json not found.")
-            flow = InstalledAppFlow.from_client_secrets_file(
-                str(CLIENT_SECRETS_FILE), SCOPES
-            )
-            creds = flow.run_local_server(port=0)
-        with open(TOKEN_PATH, "wb") as token_file:
-            pickle.dump(creds, token_file)
-    return build("youtube", "v3", credentials=creds)
+def ensure_fortnite_tag(metadata):
+    """
+    Ensures that the word 'Fortnite' appears in at least one of the following:
+    - Title
+    - Description
+    - Tags list
+
+    This helps YouTube automatically detect the game and associate the video
+    with Fortnite gameplay.
+    """
+    if "fortnite" not in metadata["title"].lower() and \
+       "fortnite" not in metadata["description"].lower() and \
+       not any("fortnite" in tag.lower() for tag in metadata.get("tags", [])):
+        metadata.setdefault("tags", []).append("Fortnite")
-def generate_description(clip_path: Path, stream_date: datetime, is_montage: bool = False) -> str:
-    """Creates a dynamic and fun YouTube description."""
-    kill_count_guess = sum(word.isdigit() for word in clip_path.stem.split())
-    date_str = stream_date.strftime("%B %d, %Y")
-    intro = "Gramps is back in Fortnite with another spicy highlight! 🦥"
-    if is_montage:
-        body = (
-            f"This reel features an outrageous compilation of top plays from our {date_str} stream.\n"
-            f"{kill_count_guess} eliminations of stupendous magnitude that must be seen to be believed!"
-        )
-    else:
-        body = (
-            f"Recorded live on {date_str}, this clip captures one of many wild moments "
-            "from the battlefield. Grab your popcorn. 🎮"
-        )
-    hashtags = "#Fortnite #Gaming #SeniorGamer #LlamaChileShop #EpicMoments"
-    return f"{intro}\n\n{body}\n\nSubscribe for more: https://youtube.com/@llamachileshop\n{hashtags}"
+def upload_video(youtube, video_path, metadata):
+    """
+    Uploads a video to YouTube with the provided metadata.
+
+    Args:
+        youtube: Authenticated YouTube API service object.
+        video_path: Path to the video file to be uploaded.
+        metadata: Dictionary containing video metadata fields.
+
+    Returns:
+        str: URL of the uploaded YouTube video.
+    """
-def upload_to_youtube(video_path: Path, title: str, description: str, is_short: bool = False) -> str:
-    """Uploads the video to YouTube and returns the video URL."""
-    youtube = authenticate_youtube()
+    # Ensure the 'Fortnite' keyword is present somewhere in metadata
+    ensure_fortnite_tag(metadata)
+
+    # Construct the request body for YouTube API
     request_body = {
         "snippet": {
-            "title": title,
-            "description": description,
-            "tags": ["Fortnite", "Gaming", "Senior Gamer", "LlamaChileShop"],
-            "categoryId": "20",  # Gaming
+            "title": metadata["title"],
+            "description": metadata["description"],
+            "tags": metadata.get("tags", DEFAULT_TAGS),
+            "categoryId": CATEGORY_ID  # Set to "Gaming"
         },
         "status": {
-            "privacyStatus": "private",
-            "selfDeclaredMadeForKids": False,
+            "privacyStatus": metadata.get("privacy", DEFAULT_PRIVACY)
         }
     }
-    media = MediaFileUpload(str(video_path), mimetype="video/mp4", resumable=True)
+
+    # Wrap the video file in a MediaFileUpload object
+    media = MediaFileUpload(video_path, mimetype="video/*", resumable=True)
+    print(f"📤 Uploading {video_path} to YouTube...")
+
+    # Execute the video insert request
     request = youtube.videos().insert(
         part="snippet,status",
         body=request_body,
@@ -85,4 +87,29 @@ def upload_to_youtube(video_path: Path, title: str, description: str, is_short:
     response = request.execute()
     video_id = response["id"]
-    return f"https://youtu.be/{video_id}"
+    youtube_url = f"https://www.youtube.com/watch?v={video_id}"
+    print(f"✅ Uploaded to YouTube: {youtube_url}")
+
+    # Record the YouTube URL in the metadata for archive history
+    metadata.setdefault("youtube_url", []).append(youtube_url)
+
+    # Persist the metadata archive only if we're not in DEBUG mode
+    if not DEBUG:
+        save_metadata_record(video_path, metadata)
+
+    return youtube_url
+
+
+def get_authenticated_service():
+    """
+    Returns an authenticated YouTube API service using Application Default Credentials.
+
+    This requires that `gcloud auth application-default login` has been run successfully,
+    or that a service account token is available in the environment.
+
+    Returns:
+        googleapiclient.discovery.Resource: The YouTube API client object.
+    """
+    credentials, _ = google.auth.default(
+        scopes=["https://www.googleapis.com/auth/youtube.upload"]
+    )
+    return build("youtube", "v3", credentials=credentials)
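
A sketch of a full upload against the new surface, assuming Application Default Credentials are configured (hypothetical path and title):

```python
from modules.yt_poster import get_authenticated_service, upload_video

youtube = get_authenticated_service()
metadata = {
    "title": "Solo Zero Build Highlights with Gramps",
    "description": "One of many wild moments from the battlefield.",
    "privacy": "private",  # keep uploads private while testing
}
# ensure_fortnite_tag() runs inside upload_video(), so the Fortnite
# keyword is appended to the tags automatically if it is missing above.
url = upload_video(youtube, "Z:/2025.07.25/montages/montage.mp4", metadata)
print(url)
```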

BIN
sanity_check.md Normal file

Binary file not shown.

106
sync_wiki.py Normal file

@@ -0,0 +1,106 @@
# sync_wiki.py

"""
🚨 DEPRECATED: This script was used to manually sync wiki pages via local `.md` files.
It is now kept as a fallback ('parachute') in case automated token-based publishing fails.

✅ DO NOT use this unless instructed.
"""

# This entire file is now considered inactive and will not be maintained unless token publishing breaks.
# All real wiki publishing is handled via automated memory-based GPT-side tools.

import os
import subprocess
import requests
from datetime import datetime

WIKI_DIR = "video-pipeline.wiki"
LOG_FILE = "logs/wiki_publish.log"
GITHUB_REPO = "LCS-Gramps/video-pipeline"
WIKI_BASE_URL = f"https://github.com/{GITHUB_REPO}/wiki"


def log_result(filename, success):
    os.makedirs(os.path.dirname(LOG_FILE), exist_ok=True)
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        status = "✅" if success else "❌"
        timestamp = datetime.now().isoformat(timespec='seconds')
        log.write(f"{timestamp} {status} {filename}\n")


def commit_and_push():
    # Explicitly list and add all .md files
    md_files = [f for f in os.listdir(WIKI_DIR) if f.endswith(".md")]
    if not md_files:
        print("⚠️ No markdown files found to commit.")
        return
    try:
        for f in md_files:
            subprocess.run(["git", "add", f], cwd=WIKI_DIR, check=True)
        result = subprocess.run(
            ["git", "commit", "-m", "📚 Sync updated wiki pages from docs/wiki"],
            cwd=WIKI_DIR,
            capture_output=True,
            text=True
        )
        if "nothing to commit" in result.stdout.lower():
            print("⚠️ Nothing to commit.")
            return
        print(result.stdout.strip())
    except subprocess.CalledProcessError as e:
        print("❌ Git add/commit failed:", e)
        return
    subprocess.run(["git", "push", "origin", "master"], cwd=WIKI_DIR, check=True)


def verify_publish():
    for file in os.listdir(WIKI_DIR):
        if file.endswith(".md"):
            name = file.replace(".md", "").replace(" ", "-")
            url = f"{WIKI_BASE_URL}/{name}"
            try:
                response = requests.get(url)
                success = response.status_code == 200
            except Exception:
                success = False
            log_result(file, success)
            print(f"{'✅' if success else '❌'} {url}")


def main():
    print("📝 Auto-generating wiki content...")
    os.makedirs(WIKI_DIR, exist_ok=True)

    autogen_content = {
        "Architecture-Overview.md": """# Architecture Overview

This page provides an overview of the internal structure of the LCS Pipeline.

## Modules

- `main.py`: Central orchestration logic
- `modules/`: Reusable utilities for title cards, thumbnails, uploads
- `assets/`: Contains branding videos and fonts

## Flow

1. Detect new video sessions
2. Generate metadata, titles, overlays
3. Render videos with intro/title/outro
4. Upload to YouTube and optionally PeerTube
5. Auto-publish wiki and social metadata
"""
    }

    # Only create or update files explicitly listed
    for filename, content in autogen_content.items():
        filepath = os.path.join(WIKI_DIR, filename)
        with open(filepath, "w", encoding="utf-8") as f:
            f.write(content.strip())
        print(f"✅ Created or updated {filename}")

    commit_and_push()
    verify_publish()


if __name__ == "__main__":
    main()

0
tests/__init__.py Normal file

20
tests/conftest.py Normal file

@@ -0,0 +1,20 @@
# tests/conftest.py

"""
Shared pytest fixtures and constants for testing the LCS video pipeline.
"""

import pytest
from pathlib import Path


@pytest.fixture(scope="session")
def test_session_path() -> Path:
    """
    Fixture providing the fixed test session directory.

    NOTE: This directory must exist and be preserved. It contains test clips
    and notes.json used by multiple tests.

    Returns:
        Path: Absolute path to test session folder.
    """
    return Path("Z:/LCS/Videos/eklipse/2025.07.25.9")

44
tests/sync_wiki.py Normal file

@@ -0,0 +1,44 @@
#!/usr/bin/env python3
"""
sync_wiki.py

Synchronizes local markdown files in docs/wiki/ to the GitHub wiki
for the Llama Chile Shop video pipeline project.

Requires the GitHub wiki repo to be cloned into ./video-pipeline.wiki/.

Author: gramps@llamachile.shop
"""

import os
import shutil
import subprocess
from pathlib import Path

print("🧠 THIS IS THE CORRECT sync_wiki.py")

# Correct paths for wiki sync
LOCAL_WIKI_SOURCE = Path("docs/wiki")
LOCAL_WIKI_REPO = Path("video-pipeline.wiki")

print("🔍 Executing: sync_wiki.py from", __file__)


def sync_wiki():
    if not LOCAL_WIKI_REPO.exists():
        print("❌ Wiki repo not found. Clone it using:")
        print("   git clone https://github.com/LCS-Gramps/video-pipeline.wiki.git")
        return

    # Copy .md files to the local wiki repo
    for md_file in LOCAL_WIKI_SOURCE.glob("*.md"):
        target = LOCAL_WIKI_REPO / md_file.name
        shutil.copy2(md_file, target)
        print(f"✅ Synced: {md_file.name}")

    # Commit and push changes
    os.chdir(LOCAL_WIKI_REPO)
    subprocess.run(["git", "add", "."], check=True)
    subprocess.run(["git", "commit", "-m", "📚 Sync updated wiki pages from docs/wiki"], check=True)
    subprocess.run(["git", "push"], check=True)
    print("🚀 Wiki updated successfully.")


if __name__ == "__main__":
    sync_wiki()

tests/test_metadata_utils.py

@@ -0,0 +1,52 @@
# tests/test_metadata_utils.py

"""
Unit tests for metadata parsing and archiving functions.
"""

from modules.metadata_utils import derive_session_metadata, save_metadata_record
from pathlib import Path
import json


def test_derive_session_metadata_structure(test_session_path):
    """
    Validates that metadata is parsed correctly and includes expected keys.
    """
    metadata = derive_session_metadata(test_session_path)

    assert "session_date" in metadata
    assert "clips" in metadata
    assert isinstance(metadata["clips"], list)
    assert len(metadata["clips"]) > 0, "Expected at least one clip in metadata"

    for clip in metadata["clips"]:
        assert "stem" in clip
        assert "highlight" in clip or "notes" in clip
        assert clip["format"] in ("wide", "vertical")


def test_save_metadata_record_creates_file(tmp_path):
    """
    Ensures metadata is saved to a properly named JSON file.
    """
    fake_record = {
        "session_date": "2025-07-25",
        "stem": "test-clip",
        "youtube_urls": ["https://youtu.be/test123"],
        "peertube_urls": [],
    }

    # Override history dir to a temp path
    from modules import metadata_utils
    metadata_utils.HISTORY_DIR = tmp_path

    save_metadata_record(fake_record)

    expected_dir = tmp_path / "2025.07.25"
    expected_file = expected_dir / "test-clip.json"
    assert expected_file.exists(), f"Expected {expected_file} to be created"

    with expected_file.open("r", encoding="utf-8") as f:
        data = json.load(f)
    assert data["youtube_urls"][0] == "https://youtu.be/test123"


0
tests/test_yt_poster.py Normal file

63
upload_youtube_montage.py Normal file

@@ -0,0 +1,63 @@
"""
upload_youtube_montage.py

Standalone entry point to upload a rendered Fortnite montage video to YouTube.
Assumes that the input video is a montage and therefore does NOT rely on a notes.* file.

Handles:
- Validating input parameters (video path)
- Deriving vertical format from filename
- Generating dynamic description via OpenAI
- Uploading to YouTube with appropriate metadata
- Flagging video as private if DEBUG is enabled

Author: Llama Chile Shop
Created: 2025-07-22
"""

import os
import sys
from pathlib import Path

from modules.config import DEBUG
from modules.yt_poster import upload_video
from modules.description_utils import generate_montage_description
from modules.title_utils import generate_montage_title
from authorize_youtube import get_authenticated_service


def main():
    """
    Entry point to handle YouTube upload of montage video.

    Usage:
        python upload_youtube_montage.py <video_path>
    """
    if len(sys.argv) != 2:
        print("Usage: python upload_youtube_montage.py <path_to_rendered_video>")
        sys.exit(1)

    # Extract stream date from parent directory (Z:\2025.06.20)
    video_path = Path(sys.argv[1])
    stream_date = video_path.parents[1].name  # '2025.06.20'

    if not os.path.isfile(video_path):
        print(f"[ERROR] File not found: {video_path}")
        sys.exit(1)

    video_name = os.path.basename(video_path)
    is_vertical = "-vert" in video_path.stem or "-vertical" in video_path.stem

    # Generate a dynamic, humorous montage description
    description = generate_montage_description()

    # Authenticate, then upload the video to YouTube.
    # upload_video() expects (youtube, video_path, metadata), so build the
    # metadata dict here rather than passing loose keyword arguments.
    youtube = get_authenticated_service()
    metadata = {
        "title": generate_montage_title(stream_date),
        "description": description,
        "privacy": "private" if DEBUG else "public",
    }
    upload_video(youtube, str(video_path), metadata)


if __name__ == "__main__":
    main()