Hey HN — I built Polaris because AI agents have a trust problem. They retrieve information but have no way to evaluate whether it's reliable, biased, or contradicted by other sources.
An agent pulling ten articles about the same event treats all of them as equally valid — even if three contradict each other. There's no mechanism to weigh evidence, flag conflicts, or assign confidence.
Polaris is a news intelligence API that solves this. An automated pipeline crawls sources across 18 verticals, then runs each story through entity extraction, counter-argument generation, and bias detection. The output is structured intelligence briefs with confidence scores and full source transparency.
The key endpoint is /verify — send any claim, get back a verdict (true/false/misleading/unverified) with a confidence score, supporting evidence, contradicting evidence, and a nuances array for the gray areas.
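To make that concrete, here's a rough sketch of how an agent might call /verify and consume the result. The base URL, auth header, and exact field names are my assumptions from this description, not the actual docs:

```python
import json
import urllib.request


def verify_claim(claim: str, api_key: str,
                 base_url: str = "https://api.example.com") -> dict:
    """POST a claim to the (hypothetical) /verify endpoint and return parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/verify",
        data=json.dumps({"claim": claim}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # auth scheme is a guess
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def summarize(result: dict) -> str:
    """Condense the verdict fields described above into one line for an agent log.

    Assumed keys: verdict, confidence, supporting_evidence, contradicting_evidence.
    """
    return (f"{result['verdict']} ({result['confidence']:.0%}), "
            f"{len(result['supporting_evidence'])} for / "
            f"{len(result['contradicting_evidence'])} against")
```

An agent could gate its own output on the confidence score, e.g. only cite a claim when the verdict is "true" above some threshold and surface the nuances array otherwise.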
Other things agents get:
- Bias detection comparing how outlets frame the same story
- Entity tracking across the news cycle with trend direction
- Embeddable trust badges for any URL
- Confidence scoring on every result
SDKs for Python, TypeScript, LangChain, CrewAI, Vercel AI, and MCP. Free tier: 1K API calls/month.
You can also try it instantly via Telegram — message @PolarisNewsBot with any topic or /verify any claim.
Would love feedback on the API design and what you'd want from a verification layer if you were building it into an agent.