MyPosts supports multiple AI providers and models, giving you flexibility to choose based on quality, cost, and speed requirements.
Available Models
Anthropic Claude
Claude models excel at creative, nuanced content with strong context understanding.
Claude 3 Opus
- Best for: High-quality, creative content
- Strengths: Superior writing, nuance, creativity
- Cost: $15/$75 per million tokens (input/output)
- Speed: Slower (5-10 seconds)
- Use when: Quality is paramount
Claude 3 Sonnet
- Best for: Balanced quality and speed
- Strengths: Good quality, faster than Opus
- Cost: $3/$15 per million tokens
- Speed: Fast (2-5 seconds)
- Use when: Daily posting, good quality needed
Claude 3 Haiku
- Best for: High-volume, simple content
- Strengths: Very fast, cost-effective
- Cost: $0.25/$1.25 per million tokens
- Speed: Very fast (<2 seconds)
- Use when: Bulk generation, simple posts
OpenAI GPT
GPT models offer broad capabilities and wide availability.
GPT-4 Turbo
- Best for: Complex reasoning, technical content
- Strengths: Broad knowledge, consistency
- Cost: $10/$30 per million tokens
- Speed: Moderate (3-7 seconds)
- Use when: Technical topics, detailed threads
GPT-3.5 Turbo
- Best for: Standard content, cost-efficiency
- Strengths: Fast, reliable, affordable
- Cost: $0.50/$1.50 per million tokens
- Speed: Fast (1-3 seconds)
- Use when: Regular posting, budget-conscious
X.AI Grok
Grok offers unique Twitter/X-optimized capabilities.
Grok Beta
- Best for: Twitter-native content
- Strengths: Current events, Twitter culture
- Cost: $5/$15 per million tokens
- Speed: Fast (2-4 seconds)
- Use when: Trending topics, Twitter-specific content
Model Selection Guide
By Use Case
Creative Content
- Claude 3 Opus - Best overall quality
- GPT-4 Turbo - Good alternative
- Claude 3 Sonnet - Budget-friendly option
News & Current Events
- Grok Beta - Most current information
- GPT-4 Turbo - Good for analysis
- Claude 3 Sonnet - Thoughtful commentary
Technical Content
- GPT-4 Turbo - Best technical accuracy
- Claude 3 Opus - Good explanations
- Claude 3 Sonnet - Balanced option
High Volume Posting
- Claude 3 Haiku - Fastest and cheapest
- GPT-3.5 Turbo - Good balance
- Grok Beta - Twitter-optimized
By Budget
Premium (No budget constraints)
- Primary: Claude 3 Opus
- Secondary: GPT-4 Turbo
- Threads: Claude 3 Opus
Standard (Balanced)
- Primary: Claude 3 Sonnet
- Secondary: Grok Beta
- Bulk: Claude 3 Haiku
Budget (Cost-conscious)
- Primary: Claude 3 Haiku
- Secondary: GPT-3.5 Turbo
- Special posts: Claude 3 Sonnet
Cost Management
Understanding Token Usage
- 1 token ≈ 4 characters
- Average tweet: 50-100 tokens
- With context/prompts: 200-500 tokens total
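As a rough illustration, here is how a per-post token estimate can be derived from the figures above; the 4-characters-per-token rule and the fixed prompt overhead are approximations, not exact tokenizer output.
```python
def estimate_post_tokens(tweet_text: str, prompt_overhead: int = 300) -> int:
    """Rough estimate: ~4 characters per token for the tweet itself, plus a
    fixed allowance for the system prompt, topic context, and examples."""
    tweet_tokens = max(1, len(tweet_text) // 4)
    return tweet_tokens + prompt_overhead

# A full 280-character tweet is ~70 tokens; with prompt overhead the request
# lands in the 200-500 token range quoted above.
print(estimate_post_tokens("x" * 280))  # 370
```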
Monthly Estimates
Based on 10 posts per day at roughly 500 tokens each (about 150,000 tokens per month):
Premium Usage
- Model: Claude 3 Opus
- Tokens/month: ~150,000
- Estimated cost: $8-12/month
Standard Usage
- Model: Claude 3 Sonnet
- Tokens/month: ~150,000
- Estimated cost: $2-3/month
Budget Usage
- Model: Claude 3 Haiku
- Tokens/month: ~150,000
- Estimated cost: $0.20-0.30/month
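These figures can be reproduced with simple arithmetic. The sketch below makes a conservative simplification by pricing every token at the model's output rate, which gives an upper bound in line with the estimates above; the per-post token count is the assumption from the previous section.
```python
OUTPUT_RATE_PER_MILLION = {  # USD per million output tokens (see model pricing above)
    "claude-3-opus": 75.00,
    "claude-3-sonnet": 15.00,
    "claude-3-haiku": 1.25,
}

def monthly_cost(model: str, posts_per_day: int = 10, tokens_per_post: int = 500) -> float:
    """Upper-bound monthly cost: all tokens priced at the output rate."""
    tokens_per_month = posts_per_day * tokens_per_post * 30  # ~150,000
    return tokens_per_month / 1_000_000 * OUTPUT_RATE_PER_MILLION[model]

for name in OUTPUT_RATE_PER_MILLION:
    print(f"{name}: ${monthly_cost(name):.2f}/month")
# claude-3-opus: $11.25  claude-3-sonnet: $2.25  claude-3-haiku: $0.19
```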
Setting Limits
Configure spending limits in settings:
- Go to AI Settings
- Set monthly token limit
- Set monthly spending limit
- Configure alert threshold (e.g., 80%)
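MyPosts applies these limits for you; if you want to sanity-check the behaviour, the alert logic reduces to a threshold comparison. The function below is an illustrative sketch, not MyPosts internals.
```python
from typing import Optional

def spending_alert(spent: float, monthly_limit: float, threshold: float = 0.80) -> Optional[str]:
    """Return a message once spend crosses the alert threshold or the hard limit."""
    if spent >= monthly_limit:
        return f"Limit reached: ${spent:.2f} of ${monthly_limit:.2f} spent."
    if spent >= monthly_limit * threshold:
        return f"Warning: {spent / monthly_limit:.0%} of the ${monthly_limit:.2f} limit used."
    return None

print(spending_alert(spent=8.40, monthly_limit=10.00))
# Warning: 84% of the $10.00 limit used.
```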
Model Configuration
Setting Default Models
```yaml
defaults:
  provider: anthropic
  model: claude-3-sonnet-20240229
fallbacks:
  - provider: openai
    model: gpt-4-turbo-preview
  - provider: anthropic
    model: claude-3-haiku-20240307
```
Per-Topic Models
Assign specific models to topics:
```yaml
topics:
  technical:
    model: gpt-4-turbo-preview
  creative:
    model: claude-3-opus-20240229
  news:
    model: grok-beta
```
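Conceptually, per-topic selection is a lookup with a fallback to the default model. The sketch below illustrates the idea; the dictionary mirrors the config above and is not MyPosts' actual code.
```python
TOPIC_MODELS = {
    "technical": ("openai", "gpt-4-turbo-preview"),
    "creative": ("anthropic", "claude-3-opus-20240229"),
    "news": ("xai", "grok-beta"),
}
DEFAULT_MODEL = ("anthropic", "claude-3-sonnet-20240229")

def model_for_topic(topic: str) -> tuple[str, str]:
    """Per-topic override when configured, otherwise the default provider/model."""
    return TOPIC_MODELS.get(topic, DEFAULT_MODEL)

print(model_for_topic("technical"))  # ('openai', 'gpt-4-turbo-preview')
print(model_for_topic("travel"))     # falls back to the default
```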
API Keys Setup
Anthropic
- Visit Anthropic Console
- Create API key
- Add to MyPosts settings
OpenAI
- Visit OpenAI Platform
- Generate API key
- Add to MyPosts settings
X.AI Grok
- Visit X.AI Developer
- Request access
- Add credentials to MyPosts
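Keys are pasted into the MyPosts settings UI. If you script against the providers directly, loading keys from environment variables is a common way to keep them out of config files; the variable names below are conventional, and the X.AI name in particular is an assumption, so adapt them to however you store secrets.
```python
import os

API_KEYS = {
    "anthropic": os.environ.get("ANTHROPIC_API_KEY"),
    "openai": os.environ.get("OPENAI_API_KEY"),
    "xai": os.environ.get("XAI_API_KEY"),  # assumed name; check your provider docs
}

missing = [provider for provider, key in API_KEYS.items() if not key]
if missing:
    raise SystemExit(f"Missing API keys for: {', '.join(missing)}")
```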
Performance Optimization
Response Time
Factors affecting generation speed:
- Model complexity
- Prompt length
- Current API load
- Network latency
Caching
MyPosts caches:
- Model responses for similar prompts
- Token counts for cost calculation
- API availability status
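The exact cache keys are internal to MyPosts, but the idea is to key responses on the model plus a normalized prompt so near-identical requests skip a paid API call. A minimal sketch:
```python
import hashlib

_cache: dict[str, str] = {}

def cache_key(model: str, prompt: str) -> str:
    """Normalize whitespace and case so trivially different prompts share a key."""
    normalized = " ".join(prompt.lower().split())
    return hashlib.sha256(f"{model}:{normalized}".encode()).hexdigest()

def generate_cached(model: str, prompt: str, generate) -> str:
    key = cache_key(model, prompt)
    if key not in _cache:
        _cache[key] = generate(model, prompt)  # only hit the API on a cache miss
    return _cache[key]
```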
Fallback Strategy
Configure automatic fallbacks:
- Primary model fails → Secondary model
- Rate limit reached → Switch provider
- API down → Queue for retry
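In practice the fallback chain simply walks the configured models in order. A sketch, assuming each provider call raises an exception on rate limits or outages:
```python
import time

def generate_with_fallback(prompt: str, models: list[tuple[str, str]], call_api) -> str:
    """Try each (provider, model) pair in order; if all fail, wait and retry once."""
    for _ in range(2):
        for provider, model in models:
            try:
                return call_api(provider, model, prompt)
            except Exception as err:  # rate limit, outage, auth error, ...
                print(f"{provider}/{model} failed ({err}); trying next fallback")
        time.sleep(30)  # queue for retry before a second pass over the chain
    raise RuntimeError("All configured models failed")
```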
Best Practices
Model Rotation
Rotate models to:
- Vary content style
- Manage costs
- Avoid API limits
- Test performance
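A simple way to rotate is round-robin across two or three models; the snippet below is one way to do it yourself, not a built-in MyPosts feature.
```python
from itertools import cycle

# Round-robin across a few models to vary style and spread cost.
rotation = cycle(["claude-3-sonnet-20240229", "gpt-3.5-turbo", "claude-3-haiku-20240307"])

for post_number in range(5):
    print(post_number, next(rotation))
```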
Prompt Optimization
Tips for better results:
- Be specific in topics
- Provide context
- Use examples
- Set tone/style
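For example, a vague request like "write a tweet about our product" gives the model little to work with; the prompt below (purely illustrative, with made-up release details) applies each tip.
```python
prompt = """Write a single tweet (under 280 characters) announcing our v2.1 release.
Context: v2.1 adds scheduled threads and per-topic model selection.
Example of our voice: "Shipping small, useful things every week."
Tone: friendly and concise, no hashtags, one emoji at most."""
```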
Quality vs. Cost
Balance considerations:
- High-engagement posts → Premium models
- Regular updates → Standard models
- Bulk content → Budget models
Monitoring Usage
Track your AI usage:
- Dashboard: Real-time token counter
- Usage Page: Detailed breakdown
- Reports: Monthly summaries
- Alerts: Limit warnings
Troubleshooting
Common Issues
"Model not available"
- Check API key validity
- Verify model name
- Check service status
"Rate limit exceeded"
- Wait for reset
- Switch to another provider
- Upgrade API plan
"Poor quality output"
- Try different model
- Improve prompts
- Add more context