AI-Powered Content Automation Pipeline
What is this called?
This is an Automated Content Pipeline. More specifically, an AI-Driven Publishing Pipeline. The infrastructure pattern underneath it is called a Scheduled CI/CD Workflow.
On a CV, you write it like this:
Architected an AI-driven content automation pipeline using GitHub Actions, Claude API, Replicate, Cloudflare R2, Supabase, and LinkedIn REST API. The system generates and publishes three posts per day with zero manual input.
It is a real, recognizable system architecture. Companies pay engineers good money to build exactly this.
What does it actually do?
At 8am, 1pm, and 6pm UTC every day, a script wakes up and does the following:
- Picks a topic from a curated pool based on the time of day
- Calls Claude to write a full blog article, a LinkedIn teaser, and an image prompt
- Sends that image prompt to an AI image model and gets back a cinematic hero image
- Downloads the image and uploads it to permanent cloud storage
- Saves the full article to the database
- Posts the teaser to LinkedIn with the image attached and a link to the live article
The whole thing takes about three minutes. Then it sleeps until the next scheduled run.
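The slot-picking logic above can be sketched as a pair of pure functions. The slot names and topic pool here are illustrative, not the production list; the real script keeps its own curated topics.

```javascript
// Map the current UTC hour to one of the three posting slots, then
// pick a topic from that slot's pool. Pool contents are placeholders.
const TOPIC_POOL = {
  morning:   ["engineering deep dives", "architecture patterns"],
  afternoon: ["tooling and productivity", "AI workflows"],
  evening:   ["career and industry takes", "postmortems"],
};

function pickSlot(utcHour) {
  // The workflow fires at 8, 13, and 18 UTC; bucket anything nearby.
  if (utcHour < 11) return "morning";
  if (utcHour < 16) return "afternoon";
  return "evening";
}

function pickTopic(utcHour, dayOfYear) {
  const slot = pickSlot(utcHour);
  const pool = TOPIC_POOL[slot];
  // Rotate deterministically by day so a rerun on the same day agrees.
  return { slot, topic: pool[dayOfYear % pool.length] };
}
```

Deterministic rotation matters here: if a run fails and you retrigger it with workflow_dispatch, you get the same topic instead of a random new one.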
Architecture at a glance
```
GitHub Actions (cron: 3x/day)
        |
        v
Claude API (Anthropic)
  Writes the LinkedIn teaser, blog title, slug,
  description, full article, and image prompt
        |
        v
Replicate API (openai/gpt-image-2)
  Generates the hero image from the prompt
        |
        v
Cloudflare R2
  Image is downloaded from Replicate and
  uploaded here for permanent storage
        |
        v
Supabase (PostgreSQL)
  Blog post row is inserted with all fields
  including the permanent R2 image URL
        |
        +-----> Next.js blog at amazesofts.com/blog/[slug]
        |         Renders the article and hero image
        |         OG metadata uses the image for link previews
        |
        +-----> LinkedIn REST API v202604
                  Uploads image to LinkedIn CDN
                  Posts the teaser with image and blog link
```
Why each technology was chosen
GitHub Actions instead of Vercel Crons
Vercel's free plan caps every serverless function at 60 seconds. The full pipeline (generating content, creating an image, uploading to storage, saving to the database, and posting to LinkedIn) takes two to four minutes. There is no way to fit it inside 60 seconds.
GitHub Actions does not have this limit. Free tier jobs can run for up to six hours. It also supports cron scheduling natively, so the switch was straightforward.
Anthropic Claude API
Claude handles all text generation. The system prompt bans em dashes, generic corporate phrasing, and filler language. It writes like a senior developer talking to a peer, not like a content marketing template. It also generates the image prompt, which is what makes each image topically relevant instead of generic.
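The generation step can be sketched with a direct call to the Anthropic Messages API (Node 18+ global fetch). The system prompt text below paraphrases the rules described above, not the production prompt, and the model name is a stand-in for whatever the pipeline pins.

```javascript
// Build the system prompt that enforces the writing rules, then ask
// Claude for one JSON object containing every text artifact the
// pipeline needs. Field names here are illustrative.
function buildSystemPrompt() {
  return [
    "You write like a senior developer talking to a peer.",
    "Never use em dashes.",
    "No generic corporate phrasing or filler language.",
    "Return ONLY a JSON object with keys: title, slug, description,",
    "article_markdown, linkedin_teaser, image_prompt.",
  ].join("\n");
}

async function generatePost(topic) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // stand-in; pin your own model version
      max_tokens: 4096,
      system: buildSystemPrompt(),
      messages: [{ role: "user", content: `Write today's post about: ${topic}` }],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.content[0].text);
}
```

Asking for one JSON object per run keeps the pipeline to a single API call: teaser, article, and image prompt all come back together and stay topically consistent.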
Replicate with openai/gpt-image-2
This model produces high quality, cinematic images. The call uses aspect ratio 3:2, which fits both blog hero images and LinkedIn link previews. The image prompt is written by Claude based on the article topic, so each image is actually relevant to what the post is about.
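A minimal sketch of the image call against Replicate's HTTP API, assuming the model identifier from this article. The input field names (`prompt`, `aspect_ratio`) are assumptions; check the model's schema on Replicate before relying on them.

```javascript
// Build the model input, then create a prediction and wait for it.
// The "Prefer: wait" header asks Replicate to block until completion.
function buildImageInput(imagePrompt) {
  return {
    prompt: imagePrompt,
    aspect_ratio: "3:2", // fits blog heroes and LinkedIn link previews
  };
}

async function generateImage(imagePrompt) {
  const res = await fetch(
    "https://api.replicate.com/v1/models/openai/gpt-image-2/predictions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
        "Content-Type": "application/json",
        Prefer: "wait",
      },
      body: JSON.stringify({ input: buildImageInput(imagePrompt) }),
    }
  );
  const prediction = await res.json();
  return prediction.output; // temporary delivery URL; see the R2 section
}
```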
Cloudflare R2 instead of Supabase Storage
Replicate image URLs expire after roughly 24 hours. If you store a Replicate URL in your database, the image on your blog will be broken by the next morning.
Two options were considered:
| | Supabase Storage | Cloudflare R2 |
|---|---|---|
| Free storage | 50 MB | 10 GB |
| Egress fees | Yes | None |
| CDN included | No | Yes |
| Good for production | No | Yes |
50 MB runs out fast with daily image uploads. R2 gives 10 GB for free with no bandwidth charges. The image is downloaded from Replicate right after generation and pushed to R2, where it lives permanently.
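The download-and-persist step can be sketched with the S3-compatible API that R2 exposes, assuming `@aws-sdk/client-s3` is installed. Environment variable names match the workflow file below; the URL helpers are pure and testable.

```javascript
// Pull the image from Replicate's temporary URL and push it to R2.
// The SDK is imported lazily so the helpers stay usable standalone.
function r2Endpoint(accountId) {
  return `https://${accountId}.r2.cloudflarestorage.com`;
}

function r2PublicUrl(publicBase, key) {
  return `${publicBase.replace(/\/$/, "")}/${key}`;
}

async function persistImage(tempUrl, key) {
  const { S3Client, PutObjectCommand } = await import("@aws-sdk/client-s3");
  const bytes = Buffer.from(await (await fetch(tempUrl)).arrayBuffer());
  const s3 = new S3Client({
    region: "auto", // R2 ignores regions; "auto" is the documented value
    endpoint: r2Endpoint(process.env.CF_R2_ACCOUNT_ID),
    credentials: {
      accessKeyId: process.env.CF_R2_ACCESS_KEY_ID,
      secretAccessKey: process.env.CF_R2_SECRET_ACCESS_KEY,
    },
  });
  await s3.send(new PutObjectCommand({
    Bucket: process.env.CF_R2_BUCKET,
    Key: key,
    Body: bytes,
    ContentType: "image/png",
  }));
  // This is the permanent URL that goes into the database.
  return r2PublicUrl(process.env.CF_R2_PUBLIC_URL, key);
}
```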
Supabase
Supabase is the database for the blog. Every post is a row with a slug, title, description, full markdown content, category, slot, image URL, and publish timestamp. The Next.js frontend queries Supabase to render the blog listing and each article page.
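A sketch of the insert step using supabase-js (assumed installed). The row shape mirrors the schema in the Database setup section; the `category` default is illustrative.

```javascript
// Assemble the row from Claude's JSON output plus the slot and the
// permanent R2 image URL, then insert it with the service role key.
function buildRow(post, slot, imageUrl) {
  return {
    slug: post.slug,
    title: post.title,
    description: post.description,
    content: post.article_markdown,
    category: post.category ?? "general", // illustrative default
    slot,
    image_url: imageUrl,
  };
}

async function savePost(post, slot, imageUrl) {
  const { createClient } = await import("@supabase/supabase-js");
  const supabase = createClient(
    process.env.PUBLIC_SUPABASE_URL,
    process.env.SUPABASE_SERVICE_ROLE_KEY // service role bypasses RLS
  );
  const { data, error } = await supabase
    .from("blog_posts")
    .insert(buildRow(post, slot, imageUrl))
    .select()
    .single();
  if (error) throw error;
  return data; // includes the generated id and published_at
}
```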
LinkedIn REST API v202604
LinkedIn's API requires a three-step process to attach an image to a post. First you request an upload URL, then you PUT the raw image bytes to that URL, then you use the returned image URN when creating the post. The blog article's OG metadata is set using the image URL from Supabase, so LinkedIn's link crawler automatically picks up the hero image as the preview.
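The three steps can be sketched as below. Header names follow LinkedIn's versioned REST API docs; the version string matches the one pinned in this article, and the post body shape should be checked against the current Posts API schema before use.

```javascript
// Build the Posts API body; kept pure so it can be tested in isolation.
function buildPostBody(authorUrn, teaser, imageUrn, articleUrl) {
  return {
    author: authorUrn,
    commentary: `${teaser}\n\nRead the full article: ${articleUrl}`,
    visibility: "PUBLIC",
    lifecycleState: "PUBLISHED",
    distribution: {
      feedDistribution: "MAIN_FEED",
      targetEntities: [],
      thirdPartyDistributionChannels: [],
    },
    content: { media: { id: imageUrn } },
  };
}

async function postToLinkedIn(teaser, imageBytes, articleUrl) {
  const author = `urn:li:organization:${process.env.LINKEDIN_ORG_ID}`;
  const headers = {
    Authorization: `Bearer ${process.env.LINKEDIN_ACCESS_TOKEN}`,
    "LinkedIn-Version": "202604", // as pinned above
    "X-Restli-Protocol-Version": "2.0.0",
    "Content-Type": "application/json",
  };
  // Step 1: request an upload URL and an image URN.
  const init = await (await fetch(
    "https://api.linkedin.com/rest/images?action=initializeUpload",
    {
      method: "POST",
      headers,
      body: JSON.stringify({ initializeUploadRequest: { owner: author } }),
    }
  )).json();
  // Step 2: PUT the raw image bytes to the returned URL.
  await fetch(init.value.uploadUrl, {
    method: "PUT",
    headers: { Authorization: headers.Authorization },
    body: imageBytes,
  });
  // Step 3: create the post referencing the image URN.
  await fetch("https://api.linkedin.com/rest/posts", {
    method: "POST",
    headers,
    body: JSON.stringify(buildPostBody(author, teaser, init.value.image, articleUrl)),
  });
}
```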
Database setup
Run this once in the Supabase SQL Editor:
```sql
CREATE TABLE blog_posts (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  slug TEXT UNIQUE NOT NULL,
  title TEXT NOT NULL,
  description TEXT NOT NULL,
  content TEXT NOT NULL,
  category TEXT NOT NULL,
  slot TEXT NOT NULL,
  linkedin_post_id TEXT,
  image_url TEXT,
  published_at TIMESTAMPTZ DEFAULT NOW(),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

ALTER TABLE blog_posts ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Public read" ON blog_posts FOR SELECT USING (true);
CREATE POLICY "Service insert" ON blog_posts FOR INSERT WITH CHECK (true);
CREATE POLICY "Service update" ON blog_posts FOR UPDATE USING (true);
```
Cloudflare R2 setup
- Log in to dash.cloudflare.com
- Go to R2 Object Storage and create a new bucket
- Open the bucket, go to Settings, find Public Development URL, and enable it
- Copy the `https://pub-xxxx.r2.dev` URL. This is your `CF_R2_PUBLIC_URL`
- Go back to the R2 overview page (not inside any bucket) and click Manage R2 API Tokens
- Create a new token with Object Read and Write permissions, scoped to your bucket only
- Copy the Access Key ID and Secret Access Key. They are shown only once
Your Account ID is in the S3 API URL shown in the bucket settings:
https://[ACCOUNT_ID].r2.cloudflarestorage.com/...
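If you want to sanity-check the value you copied, the account ID can be extracted from that S3 API URL mechanically. This helper is illustrative, not part of the pipeline.

```javascript
// Pull the account ID out of the S3 API URL shown in bucket settings.
function accountIdFromS3Url(s3Url) {
  const m = new URL(s3Url).hostname.match(
    /^([^.]+)\.r2\.cloudflarestorage\.com$/
  );
  if (!m) throw new Error("not an R2 S3 API URL");
  return m[1];
}
```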
GitHub Actions setup
The workflow file
Location: .github/workflows/linkedin-post.yml
```yaml
name: LinkedIn Auto Post

on:
  schedule:
    - cron: "0 8 * * *"
    - cron: "0 13 * * *"
    - cron: "0 18 * * *"
  workflow_dispatch:

jobs:
  post:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v3
        with:
          version: 10
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "pnpm"
      - run: pnpm install --frozen-lockfile
      - run: node .github/scripts/post.mjs
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          REPLICATE_API_TOKEN: ${{ secrets.REPLICATE_API_TOKEN }}
          PUBLIC_SUPABASE_URL: ${{ secrets.PUBLIC_SUPABASE_URL }}
          SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.SUPABASE_SERVICE_ROLE_KEY }}
          LINKEDIN_ACCESS_TOKEN: ${{ secrets.LINKEDIN_ACCESS_TOKEN }}
          LINKEDIN_MEMBER_ID: ${{ secrets.LINKEDIN_MEMBER_ID }}
          LINKEDIN_ORG_ID: ${{ secrets.LINKEDIN_ORG_ID }}
          CF_R2_ACCOUNT_ID: ${{ secrets.CF_R2_ACCOUNT_ID }}
          CF_R2_ACCESS_KEY_ID: ${{ secrets.CF_R2_ACCESS_KEY_ID }}
          CF_R2_SECRET_ACCESS_KEY: ${{ secrets.CF_R2_SECRET_ACCESS_KEY }}
          CF_R2_BUCKET: ${{ secrets.CF_R2_BUCKET }}
          CF_R2_PUBLIC_URL: ${{ secrets.CF_R2_PUBLIC_URL }}
```
One important note about the order: pnpm/action-setup must come before actions/setup-node when you use cache: 'pnpm'. If you flip them, the node setup step tries to find pnpm before it is installed and the whole job fails.
Secrets to add in GitHub
Go to your repo, then Settings, then Secrets and variables, then Actions.
| Secret | Where to find it |
|---|---|
| `ANTHROPIC_API_KEY` | console.anthropic.com |
| `REPLICATE_API_TOKEN` | replicate.com account settings |
| `PUBLIC_SUPABASE_URL` | Supabase project settings, API tab |
| `SUPABASE_SERVICE_ROLE_KEY` | Supabase project settings, API tab (use service_role, not anon) |
| `LINKEDIN_ACCESS_TOKEN` | LinkedIn OAuth flow (see below) |
| `LINKEDIN_MEMBER_ID` | LinkedIn API /userinfo response |
| `LINKEDIN_ORG_ID` | Your LinkedIn company page numeric ID |
| `CF_R2_ACCOUNT_ID` | Cloudflare R2, visible in the S3 API URL |
| `CF_R2_ACCESS_KEY_ID` | Cloudflare R2 API token creation page |
| `CF_R2_SECRET_ACCESS_KEY` | Cloudflare R2 API token creation page |
| `CF_R2_BUCKET` | The name of your R2 bucket |
| `CF_R2_PUBLIC_URL` | The pub-xxxx.r2.dev URL from bucket settings |
LinkedIn OAuth setup
LinkedIn uses OAuth 2.0. The access token is valid for 60 days. LinkedIn does not support refresh tokens on the free developer tier, so you need to renew it manually every two months.
How to get the token:
- Create a LinkedIn app at developer.linkedin.com
- Add the `w_member_social` and `r_basicprofile` OAuth scopes
- Trigger the Authorization Code flow to get a short-lived code
- Exchange the code immediately for an access token via `POST /oauth/v2/accessToken` (the code expires in 30 seconds)
- Store the token as `LINKEDIN_ACCESS_TOKEN` in GitHub secrets
If you want posts to come from a company page instead of a personal profile, set LINKEDIN_ORG_ID to the numeric ID of your LinkedIn organization.
Errors we hit and how we fixed them
pnpm not found in GitHub Actions
Cause: actions/setup-node ran before pnpm/action-setup installed pnpm.
Fix: Move pnpm/action-setup above actions/setup-node in the workflow.
Claude returned JSON with code fences
Cause: Even when told not to, Claude sometimes wraps JSON in triple backtick blocks.
Fix: Strip the fences with a regex before calling JSON.parse().
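A minimal version of that fence-stripping fix:

```javascript
// Remove an optional leading ```json (or bare ```) fence and the
// trailing fence, then parse. Plain JSON passes through unchanged.
function parseModelJson(text) {
  const cleaned = text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  return JSON.parse(cleaned);
}
```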
Supabase fetch failed
Cause: Supabase free tier pauses projects after a week of inactivity. Also check that all secrets are set.
Fix: Go to the Supabase dashboard and resume the project.
Vercel build failed with invalid maxDuration
Cause: An old API route had maxDuration = 300, which exceeds the Hobby plan limit of 60 seconds.
Fix: Delete the route. GitHub Actions handles the scheduling now, so the route is not needed.
Blog images broken after 24 hours
Cause: Replicate delivery URLs expire.
Fix: Download the image immediately after generation and upload it to Cloudflare R2.
Image URL was a URL object, not a string
Cause: The Replicate SDK's .url() method returns a URL object in newer versions.
Fix: Use .url().href or check with typeof before storing.
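The normalization fix in one helper:

```javascript
// Accept either a plain string or a WHATWG URL object and always
// return a string, so the database never receives "[object URL]".
function toUrlString(value) {
  return typeof value === "string" ? value : value.href;
}
```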
What it costs
Based on 3 posts per day, roughly 90 posts per month.
| Service | Free tier | Estimated monthly cost |
|---|---|---|
| GitHub Actions | 2,000 minutes/month | Free (uses about 270 min) |
| Anthropic Claude | Pay per token | $1 to $3 |
| Replicate gpt-image-2 | Pay per run | $9 to $18 |
| Cloudflare R2 | 10 GB, 10M operations | Free |
| Supabase | 500 MB database | Free |
| Vercel | Hobby plan | Free |
Total: roughly $10 to $20 per month. All infrastructure is free. The only real cost is the AI API calls.
What you could build on top of this
- Connect a custom domain to the R2 bucket for branded image URLs and proper CDN caching
- Add a reminder script that emails you seven days before the LinkedIn token expires
- Track LinkedIn post performance by pulling impressions and reactions back into Supabase
- Prevent topic repetition by checking the last seven days of posts before picking a new topic