A purpose-built NLP API that classifies Reddit posts, news, and social financial content across 7 dimensions in a single call. 6–30× cheaper than LLMs. 20× faster.
```python
import finsignals

client = finsignals.Client("fs_your_key_here")

result = client.classify(
    ticker="NVDA",
    body="NVDA to $200 EOY 🚀🚀 DD inside"
)
```

Real response envelope — all 7 heads in `outputs[0]`:

```json
{
  "model_version": "2.0.0",
  "credits_charged": 1.0,
  "outputs": [{
    "sentiment": { "label": "positive", "positive": 0.89 },
    "directionality": { "label": "bullish", "bullish": 0.84 },
    "quality": { "label": "relevant", "relevant": 0.76 },
    "relevance_score": 0.9137,
    "author_confidence": 0.5802,
    "sarcasm": false
  }]
}
```
Every API call returns all seven heads simultaneously. No extra prompts, no retries, no JSON parsing failures.
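Because the envelope is plain JSON with fixed field names, unpacking it is a dictionary lookup, not a parsing gamble — a minimal sketch using the field names from the sample response above:

```python
# Sample envelope, copied from the response shown above
envelope = {
    "model_version": "2.0.0",
    "credits_charged": 1.0,
    "outputs": [{
        "sentiment": {"label": "positive", "positive": 0.89},
        "directionality": {"label": "bullish", "bullish": 0.84},
        "quality": {"label": "relevant", "relevant": 0.76},
        "relevance_score": 0.9137,
        "author_confidence": 0.5802,
        "sarcasm": False,
    }],
}

out = envelope["outputs"][0]
heads = {
    "sentiment": out["sentiment"]["label"],
    "directionality": out["directionality"]["label"],
    "quality": out["quality"]["label"],
    "relevance": out["relevance_score"],
    "author_confidence": out["author_confidence"],
    "sarcasm": out["sarcasm"],
}
print(heads)  # every head available from the one response, no retries
```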
At 500,000 posts / month, the difference is ~$1,400. That's $16,800 / year. And you'd still need to write the prompt, handle malformed JSON, and tolerate 200ms+ latency on every call.
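The per-post arithmetic behind that gap, using the monthly figures from the comparison table below (a back-of-envelope sketch, not a quote):

```python
POSTS = 500_000  # posts per month, the volume used in the comparison table

# Monthly costs at that volume, taken from the table below
costs = {"FinSignals Pro": 224, "Claude Sonnet 4": 1200, "GPT-4o": 1750}

for name, monthly in costs.items():
    # e.g. FinSignals Pro works out to $0.45 per 1,000 posts
    print(f"{name}: ${monthly / POSTS * 1000:.2f} per 1,000 posts")

# Relative cost vs FinSignals Pro
print(f"GPT-4o is {costs['GPT-4o'] / costs['FinSignals Pro']:.1f}x the price")
```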
| Dimension | FinSignals Pro | Claude Sonnet 4 | GPT-4o | GPT-4o-mini | Self-hosted LLM |
|---|---|---|---|---|---|
| Cost · 500K posts/mo | ~$224 | ~$1,200 | ~$1,750 | ~$210 | $1,500–$5,700 |
| Total response time · single call | ~1,500 ms | 2,000–4,000 ms | 2,000–4,500 ms | 1,500–3,000 ms | 2,000–8,000 ms |
| Real-time batch throughput · synchronous, results in one response | ~8 ms / item · 256 items in <2,000 ms total | not available · async batch only (minutes–hours) | not available · async batch only (up to 24 hrs) | not available · async batch only (up to 24 hrs) | possible · depends on serving setup |
| Reddit social content | Finance-tuned | General model | General model | General model | General model |
| Structured output | Always guaranteed | Sometimes fails | Sometimes fails | Sometimes fails | Often fails |
| 7 heads · one call | ✓ native | Extra tokens | Extra tokens | Extra tokens | Extra prompting |
| Prompt engineering | None required | Required | Required | Required | Extensive |
| Understands "diamond hands", "DD", "🚀🚀" | ✓ Trained on it | Partial | Partial | Partial | Unlikely |
Credit-based pricing. Single call = 1.00 credit. Batch = 1.00 for the first item + 0.70 for each additional item in the same request. No hidden fees.
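That pricing reduces to one formula — credits = 1.00 + 0.70 × (n − 1) for n items in a single request. A quick sanity check:

```python
def credits_for_batch(n_items: int) -> float:
    """Credits charged for one request: 1.00 for the first item,
    0.70 for each additional item in the same request."""
    if n_items < 1:
        raise ValueError("batch must contain at least one item")
    return 1.0 + 0.7 * (n_items - 1)

print(credits_for_batch(1))    # 1.0 — a single call
print(credits_for_batch(2))    # 1.7
print(credits_for_batch(256))  # 179.5 vs 256.0 undiscounted, ~30% off
```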
Need more than 10M credits/month? Talk to us about Enterprise → · PAYG credit packs available from the dashboard (100K, 1M, 10M credits, no expiry while subscribed)
Email only. No credit card. Your 1,000 free credits activate immediately and your key appears on the next screen.
Find it in the dashboard under API Keys. Set it as FINSIGNALS_API_KEY in your environment.
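Reading the key from the environment keeps it out of source control — a minimal sketch; the `Client(api_key=...)` constructor follows the SDK examples on this page:

```python
import os

# Assumes FINSIGNALS_API_KEY was exported in your shell, e.g.:
#   export FINSIGNALS_API_KEY="fs_your_key_here"
os.environ.setdefault("FINSIGNALS_API_KEY", "fs_your_key_here")  # demo fallback only

api_key = os.environ["FINSIGNALS_API_KEY"]
assert api_key.startswith("fs_"), "keys on this page start with fs_"

# Pass it to the SDK client (constructor shown in the quickstart):
# client = finsignals.Client(api_key=api_key)
```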
pip install finsignals — or send a plain HTTP POST with your key in the header. No SDK required.
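For the no-SDK route, a plain POST looks roughly like this. The endpoint URL and JSON field names here are assumptions inferred from the SDK examples, not documented values — check the dashboard docs for the real endpoint:

```python
import json
import urllib.request

API_KEY = "fs_your_key_here"
URL = "https://api.finsignals.example/v2/classify"  # hypothetical endpoint

payload = {"ticker": "AAPL", "body": "Apple crushes Q3 earnings, up 12% AH"}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",  # key goes in the header
        "Content-Type": "application/json",
    },
    method="POST",
)
# Network call — not executed in this sketch:
# with urllib.request.urlopen(req) as response:
#     envelope = json.load(response)
#     print(envelope["outputs"][0]["sentiment"]["label"])
```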
All 7 heads come back in a single response, at under 15 ms of inference time per post. Batch up to 256 posts in one call for the 30% credit discount.
```python
import finsignals

client = finsignals.Client(api_key="fs_your_key_here")

# Single classification
result = client.classify(
    ticker="AAPL",
    body="Apple crushes Q3 earnings, up 12% AH"
)
print(result.sentiment.label)    # "positive"
print(result.relevance_score)    # 0.9412
print(result.credits_charged)    # 1.0

# Batch: 1.0 + 0.7*(n-1) credits (e.g. 2 items = 1.7)
results = client.classify_batch([
    {"ticker": "TSLA", "body": "Tesla misses deliveries..."},
    {"ticker": "NVDA", "body": "Blackwell demand insane 🚀"},
])
```
FinSignals classifies text. You need a source of text first. Two clean options:
```python
import praw
import finsignals

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="...")
client = finsignals.Client("fs_your_key_here")

# Fetch top posts from r/wallstreetbets
posts = list(reddit.subreddit("wallstreetbets").hot(limit=100))
items = [{"ticker": "GME", "title": p.title, "body": p.selftext} for p in posts]

# Classify the whole batch in one call
results = client.classify_batch(items)

# Filter for high-confidence bullish signals
signals = [
    (post, out)
    for post, out in zip(posts, results)
    if out.directionality.label == "bullish"
    and out.quality.label == "relevant"
    and out.relevance_score > 0.7
    and not out.sarcasm
]
print(f"{len(signals)} signals from {len(posts)} posts")
```
💡 Most users combine automated data collection with FinSignals to build a fully hands-off classification pipeline. The data source is interchangeable. The classification layer is not.
Full pipeline tutorial →

Free tier. No credit card. 1,000 credits are waiting for you right now.