Build Your First AI Agent in 30 Minutes
Stop reading about AI agents and build one. A practical, no-fluff guide to creating an autonomous agent that actually does something useful.
Everybody’s talking about AI agents. Almost nobody is building them. Let’s fix that.
What We’re Building
A simple autonomous agent that:
- Monitors a subreddit for posts matching specific keywords
- Summarizes the relevant posts using an LLM
- Sends you a daily digest via email or Telegram
Total cost: $0 (using free tiers). Time: 30 minutes.
Prerequisites
- Python 3.10+
- An OpenAI API key (or a key from any other LLM provider)
- A Reddit account (for API access)
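Before writing any agent code, it helps to fail fast on missing credentials. Here's a minimal sketch, assuming you keep secrets in environment variables; the variable names below are a convention chosen for this tutorial, not anything the libraries require, though the OpenAI SDK does look for OPENAI_API_KEY by default:

```python
import os

REQUIRED_VARS = ["OPENAI_API_KEY", "TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID"]

def load_config():
    """Fail at startup, not at 9am, if a credential is missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```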
Step 1: The Perception Layer
Your agent needs eyes. In this case, Reddit’s API:
```python
import requests

def fetch_reddit_posts(subreddit, keywords, limit=25):
    url = f"https://www.reddit.com/r/{subreddit}/new.json?limit={limit}"
    # Reddit rejects requests without a custom User-Agent
    headers = {"User-Agent": "AI-Agent/1.0"}
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    posts = response.json()["data"]["children"]

    relevant = []
    for post in posts:
        title = post["data"]["title"].lower()
        if any(kw.lower() in title for kw in keywords):
            relevant.append({
                "title": post["data"]["title"],
                "url": f"https://reddit.com{post['data']['permalink']}",
                "score": post["data"]["score"],
                "text": post["data"].get("selftext", "")[:500]  # cap body text for the LLM
            })
    return relevant
```
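One caveat: Reddit rate-limits unauthenticated clients aggressively, so this call will occasionally fail. A small retry helper is a cheap hedge; this is a sketch, and with_retries and its backoff numbers are choices made here, not part of requests:

```python
import time

def with_retries(fn, attempts=3, delay=2.0):
    """Call fn(); on failure, wait with exponential backoff and try again.
    Re-raises the last exception if every attempt fails."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * (2 ** attempt))

# Usage:
# posts = with_retries(lambda: fetch_reddit_posts("artificial", ["agent"]))
```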
Step 2: The Brain
Now it needs to think. Feed the posts to an LLM for summarization:
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_posts(posts):
    if not posts:
        return "No relevant posts found today."

    posts_text = "\n\n".join(
        f"**{p['title']}** (Score: {p['score']})\n{p['text']}"
        for p in posts
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # cheap and fast
        messages=[{
            "role": "system",
            "content": "Summarize these Reddit posts into a brief, actionable digest. Highlight anything that represents an opportunity or trend."
        }, {
            "role": "user",
            "content": posts_text
        }]
    )
    return response.choices[0].message.content
```
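One thing to watch: with 50 posts at up to 500 characters each, posts_text can get large. A crude guard before the API call keeps the prompt a manageable size; the character budget below is a rough assumption (about 4 characters per token), not an official limit:

```python
def clamp_digest_input(posts_text, max_chars=12000):
    """Truncate the combined post text so the prompt stays within budget."""
    if len(posts_text) <= max_chars:
        return posts_text
    return posts_text[:max_chars] + "\n\n[truncated]"
```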
Step 3: The Action Layer
It needs to do something with its analysis. Send it to you via Telegram (create a bot with @BotFather to get a token):
```python
import requests

def send_telegram(bot_token, chat_id, message):
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    response = requests.post(url, json={
        "chat_id": chat_id,
        "text": message,
        "parse_mode": "Markdown"
    }, timeout=10)
    response.raise_for_status()
```
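One real constraint to know about: Telegram caps a single message at 4,096 characters, and a longer digest will be rejected outright. A splitter that breaks on newlines is a simple workaround; split_message is a helper written for this tutorial, not part of any Telegram SDK:

```python
TELEGRAM_LIMIT = 4096  # Telegram rejects message text longer than this

def split_message(text, limit=TELEGRAM_LIMIT):
    """Split a long digest into chunks Telegram will accept,
    preferring to break at a newline when one is in range."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit  # no newline to break on; hard-split
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks

# Usage:
# for chunk in split_message(summary):
#     send_telegram(BOT_TOKEN, CHAT_ID, chunk)
```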
Step 4: The Loop
Wire it together and schedule it (pip install schedule if you don't have the library):
```python
import os
import time

import schedule

# Set these in your shell before running
BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]

def run_agent():
    posts = fetch_reddit_posts(
        subreddit="artificial",
        keywords=["agent", "autonomous", "automation", "bot"],
        limit=50
    )
    summary = summarize_posts(posts)
    send_telegram(BOT_TOKEN, CHAT_ID, f"🤖 Daily AI Digest\n\n{summary}")

schedule.every().day.at("09:00").do(run_agent)

while True:
    schedule.run_pending()
    time.sleep(60)
```
What You Just Built
A three-layer autonomous agent:
- Perception: Reddit API monitoring
- Reasoning: LLM-powered analysis
- Action: Automated delivery
This perception-reasoning-action loop is the same basic architecture behind most AI agents, trading bots, and automation systems. The differences are scale and sophistication.
Next Steps
- Add more data sources (Twitter, news APIs, Discord)
- Add decision-making (should I alert immediately or batch?)
- Add memory (track what you’ve already seen)
- Deploy it to a server so it runs 24/7
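Memory is the easiest of these to sketch: persist the URLs you've already digested and filter them out each run. This is a minimal version, and the file name and JSON format are arbitrary choices:

```python
import json
import os

SEEN_FILE = "seen_posts.json"  # arbitrary location

def load_seen():
    """Load the set of previously digested post URLs, if any."""
    if os.path.exists(SEEN_FILE):
        with open(SEEN_FILE) as f:
            return set(json.load(f))
    return set()

def save_seen(seen):
    """Persist the seen set for the next run."""
    with open(SEEN_FILE, "w") as f:
        json.dump(sorted(seen), f)

def filter_new(posts, seen):
    """Keep only posts we haven't digested before, and remember them."""
    fresh = [p for p in posts if p["url"] not in seen]
    seen.update(p["url"] for p in fresh)
    return fresh
```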
The best agent is the one that’s running. Build ugly, ship fast, iterate.