Reddit API Guide: Complete Developer Tutorial (2026)
Build apps with Reddit's API using OAuth2 and practical code examples
Reddit's API lets you programmatically access one of the internet's largest communities. You can build bots, analytics tools, content aggregators, monitoring systems, and more. This guide covers everything you need to get started with the Reddit API in 2026, including authentication, key endpoints, rate limits, and working code examples.
Reddit API in 2026: What You Need to Know
Since the API pricing changes in 2023, Reddit's free API tier is more limited but still available for non-commercial use and small-scale projects. Here is the current landscape:
| Tier | Rate Limit | Cost | Use Case |
|---|---|---|---|
| Free (non-commercial) | 100 requests/minute | Free | Personal projects, research, bots |
| Free (commercial) | 100 requests/minute | Free (with approval) | Low-volume commercial apps |
| Enterprise | Custom | Contact Reddit | High-volume commercial use |
All access requires OAuth2 authentication. The old cookie-based authentication and unauthenticated access are no longer supported.
Step 1: Create a Reddit App
- Go to reddit.com/prefs/apps.
- Scroll down and click "create another app...".
- Fill in the details:
| Field | Value |
|---|---|
| Name | Your app name |
| Type | Script (for personal use) or Web app (for others to use) |
| Description | Brief description of your app |
| About URL | Your website or GitHub repo |
| Redirect URI | http://localhost:8080 (for script apps) |
- Click Create app.
- Note your Client ID (under the app name) and Client Secret.
# Your credentials
export REDDIT_CLIENT_ID="your_client_id"
export REDDIT_CLIENT_SECRET="your_client_secret"
export REDDIT_USERNAME="your_username"
export REDDIT_PASSWORD="your_password"
Step 2: Authenticate with OAuth2
Reddit requires OAuth2 for all API requests. For script-type apps (personal use), use the password grant flow.
Using cURL
# Get an access token
curl -X POST 'https://www.reddit.com/api/v1/access_token' \
-u "$REDDIT_CLIENT_ID:$REDDIT_CLIENT_SECRET" \
-d "grant_type=password&username=$REDDIT_USERNAME&password=$REDDIT_PASSWORD" \
-A "MyApp/1.0 by YourUsername"
Response:
{
  "access_token": "your_access_token",
  "token_type": "bearer",
  "expires_in": 86400,
  "scope": "*"
}
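If you prefer raw HTTP from Python instead of a wrapper library, the same token request can be made with requests. This is a minimal sketch, assuming the environment variables exported in Step 1:
```python
import os
import requests

# Exchange username/password for an access token (script-app password grant)
response = requests.post(
    "https://www.reddit.com/api/v1/access_token",
    auth=(os.environ["REDDIT_CLIENT_ID"], os.environ["REDDIT_CLIENT_SECRET"]),
    data={
        "grant_type": "password",
        "username": os.environ["REDDIT_USERNAME"],
        "password": os.environ["REDDIT_PASSWORD"],
    },
    headers={"User-Agent": "MyApp/1.0 by YourUsername"},
)
response.raise_for_status()
access_token = response.json()["access_token"]
print(f"Token expires in {response.json()['expires_in']} seconds")
```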
Using Python (with PRAW)
PRAW (Python Reddit API Wrapper) is the recommended library for Python. It handles authentication, rate limiting, and pagination automatically.
pip install praw
import praw
reddit = praw.Reddit(
    client_id="your_client_id",
    client_secret="your_client_secret",
    username="your_username",
    password="your_password",
    user_agent="MyApp/1.0 by YourUsername",
)
# Verify authentication
print(f"Logged in as: {reddit.user.me()}")
Using JavaScript (with snoowrap)
npm install snoowrap
import Snoowrap from 'snoowrap';
const reddit = new Snoowrap({
  userAgent: 'MyApp/1.0 by YourUsername',
  clientId: process.env.REDDIT_CLIENT_ID,
  clientSecret: process.env.REDDIT_CLIENT_SECRET,
  username: process.env.REDDIT_USERNAME,
  password: process.env.REDDIT_PASSWORD,
});
Important: Set a Descriptive User-Agent
Reddit requires a unique User-Agent string and will throttle or block generic ones:
# Good
MyResearchBot/1.0 by u/YourUsername
# Bad
python:requests
Mozilla/5.0
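If you are making raw HTTP calls, one easy way to guarantee every request carries the same descriptive User-Agent is to set it once on a requests.Session. A small sketch, assuming the access token from Step 2:
```python
import requests

session = requests.Session()
session.headers.update({
    "User-Agent": "MyResearchBot/1.0 by u/YourUsername",
    "Authorization": f"Bearer {access_token}",  # token obtained in Step 2
})

# Every request made through this session now includes both headers
response = session.get("https://oauth.reddit.com/api/v1/me")
print(response.json()["name"])
```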
Step 3: Core API Endpoints
All authenticated requests go to https://oauth.reddit.com (not www.reddit.com).
Fetch Subreddit Posts
Python (PRAW):
# Get the top 10 hot posts from r/programming
subreddit = reddit.subreddit("programming")
for post in subreddit.hot(limit=10):
    print(f"{post.score:>6} | {post.title}")
    print(f" | {post.url}")
    print()
JavaScript (snoowrap):
const posts = await reddit.getSubreddit('programming').getHot({ limit: 10 });
posts.forEach(post => {
  console.log(`${post.score} | ${post.title}`);
  console.log(`  ${post.url}`);
});
cURL:
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
-A "MyApp/1.0 by YourUsername" \
"https://oauth.reddit.com/r/programming/hot?limit=10"
Search Reddit
# Search across all of Reddit
results = reddit.subreddit("all").search("machine learning", sort="relevance", time_filter="month", limit=20)
for post in results:
    print(f"r/{post.subreddit} | {post.title}")
Get Comments on a Post
submission = reddit.submission(id="post_id_here")
submission.comments.replace_more(limit=0)  # Resolve "load more comments" placeholders
for comment in submission.comments.list():
    print(f"u/{comment.author}: {comment.body[:100]}")
Submit a Post
# Text post
subreddit = reddit.subreddit("test")
subreddit.submit(
title="My Post Title",
selftext="This is the body of my post."
)
# Link post
subreddit.submit(
title="Check out this article",
url="https://example.com/article"
)
Reply to a Comment
comment = reddit.comment(id="comment_id_here")
comment.reply("Thanks for sharing! Great insight.")
Get User Information
import datetime

user = reddit.redditor("spez")
print(f"Username: {user.name}")
print(f"Karma: {user.link_karma + user.comment_karma}")
created = datetime.datetime.fromtimestamp(user.created_utc, tz=datetime.timezone.utc)
print(f"Account created: {created:%Y-%m-%d}")
# Get user's recent posts
for post in user.submissions.new(limit=5):
    print(f" {post.title}")
Key Endpoints Reference
| Endpoint | Method | Description |
|---|---|---|
| /r/{subreddit}/hot | GET | Hot posts in a subreddit |
| /r/{subreddit}/new | GET | New posts |
| /r/{subreddit}/top | GET | Top posts (add ?t=day/week/month/year/all) |
| /r/{subreddit}/search | GET | Search within a subreddit |
| /api/submit | POST | Create a new post |
| /api/comment | POST | Post a comment |
| /api/vote | POST | Upvote/downvote |
| /api/subscribe | POST | Subscribe/unsubscribe from a subreddit |
| /user/{username}/about | GET | User profile info |
| /user/{username}/submitted | GET | User's posts |
| /user/{username}/comments | GET | User's comments |
| /api/v1/me | GET | Current authenticated user |
| /subreddits/search | GET | Search for subreddits |
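The wrapper libraries cover most of these endpoints, but you can also call any of them directly. As an illustrative sketch (the bearer token and User-Agent are the ones from Step 2), here is the raw top listing with the t time filter:
```python
import requests

headers = {
    "Authorization": f"Bearer {access_token}",  # token from Step 2
    "User-Agent": "MyApp/1.0 by YourUsername",
}

# Top posts of the past week in r/programming
response = requests.get(
    "https://oauth.reddit.com/r/programming/top",
    headers=headers,
    params={"t": "week", "limit": 5},
)
response.raise_for_status()

for child in response.json()["data"]["children"]:
    post = child["data"]
    print(f"{post['score']:>6} | {post['title']}")
```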
Rate Limits and Best Practices
Current Rate Limits
| Resource | Limit |
|---|---|
| Requests per minute (authenticated) | 100 |
| Requests per minute (unauthenticated, deprecated) | 10 |
| Access token lifetime | 24 hours |
| Listing pagination | Max 100 items per page, 1000 total |
Headers to Monitor
Check these response headers to track your rate limit usage:
# With raw requests
import requests
response = requests.get(
    "https://oauth.reddit.com/r/programming/hot",
    headers={
        "Authorization": f"Bearer {access_token}",
        "User-Agent": "MyApp/1.0 by YourUsername",
    },
)
print(f"Remaining: {response.headers.get('x-ratelimit-remaining')}")
print(f"Reset in: {response.headers.get('x-ratelimit-reset')} seconds")
print(f"Used: {response.headers.get('x-ratelimit-used')}")
Best Practices
- Respect rate limits. PRAW handles this automatically. If you are using raw HTTP, implement exponential backoff (see the error-handling example later in this guide).
- Cache responses. Reddit data does not change every second; caching subreddit listings for 60-120 seconds is usually enough (see the caching sketch after the pagination example below).
- Use the .json suffix for quick testing. You can append .json to most Reddit URLs to see the raw JSON. These unauthenticated requests do not count against your OAuth rate limit, but they fall under the much lower unauthenticated limit (see the table above):
https://www.reddit.com/r/programming/hot.json
- Handle pagination correctly. Use the after parameter with the last item's fullname:
# Manual pagination (headers contains the Authorization and User-Agent fields shown earlier)
params = {"limit": 100, "after": None}
all_posts = []
for _ in range(10):  # 10 pages max
    response = requests.get(
        "https://oauth.reddit.com/r/programming/new",
        params=params,
        headers=headers,
    )
    data = response.json()
    posts = data["data"]["children"]
    all_posts.extend(posts)
    after = data["data"]["after"]
    if not after:
        break
    params["after"] = after
Building a Simple Reddit Bot
Here is a complete example of a bot that monitors a subreddit for keywords and replies:
import praw
import time
reddit = praw.Reddit(
    client_id="your_client_id",
    client_secret="your_client_secret",
    username="your_bot_username",
    password="your_bot_password",
    user_agent="KeywordBot/1.0 by u/YourUsername",
)
subreddit = reddit.subreddit("test")
keywords = ["help", "question", "how to"]
print("Bot is running...")
# Stream new comments in real-time
for comment in subreddit.stream.comments(skip_existing=True):
    body_lower = comment.body.lower()
    if any(keyword in body_lower for keyword in keywords):
        print(f"Found keyword in comment by u/{comment.author}")
        print(f" {comment.body[:100]}")
        # Reply to the comment
        try:
            comment.reply(
                "Hi! It looks like you might need help. "
                "Check out the subreddit wiki for common answers."
            )
            print(" Replied successfully!")
        except Exception as e:
            print(f" Failed to reply: {e}")
        time.sleep(2)  # Avoid rate limiting
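In practice you will also want the bot to remember which comments it has already answered, so a restart does not produce duplicate replies. One hypothetical approach is to persist seen comment IDs to a small local file (the filename and format here are arbitrary):
```python
import json
import os

SEEN_FILE = "replied_comments.json"  # arbitrary local state file

def load_seen():
    """Load the set of comment IDs the bot has already replied to."""
    if os.path.exists(SEEN_FILE):
        with open(SEEN_FILE) as f:
            return set(json.load(f))
    return set()

def save_seen(seen):
    with open(SEEN_FILE, "w") as f:
        json.dump(sorted(seen), f)

# Inside the stream loop, skip anything already handled and record new replies:
#     if comment.id in seen:
#         continue
#     comment.reply(...)
#     seen.add(comment.id)
#     save_seen(seen)
```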
Error Handling
Common errors and how to handle them:
| Status Code | Meaning | Solution |
|---|---|---|
| 401 | Token expired or invalid | Refresh your access token |
| 403 | Forbidden (banned, wrong scope) | Check app permissions and account status |
| 429 | Rate limited | Wait and retry with backoff |
| 500 | Reddit server error | Retry after a delay |
| 503 | Reddit is overloaded | Wait 30-60 seconds and retry |
import time
import requests
def reddit_request(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
        elif response.status_code == 429:
            wait = int(response.headers.get("x-ratelimit-reset", 60))
            print(f"Rate limited. Waiting {wait}s...")
            time.sleep(wait)
        elif response.status_code >= 500:
            time.sleep(2 ** attempt * 10)
        else:
            response.raise_for_status()
    raise Exception("Max retries exceeded")
Wrapping Up
The Reddit API remains a powerful tool for building bots, analytics platforms, and social monitoring tools. Use PRAW for Python projects, snoowrap for JavaScript, and always respect rate limits and Reddit's API terms of service.
If you are building applications that combine social media data with AI-powered media generation, such as creating visual summaries of trending posts or generating AI video content from Reddit threads, check out Hypereal AI. Hypereal provides affordable APIs for AI image and video generation that integrate easily into your data pipelines.
