SummerEyes
Formal uncertainty reasoning engine. Auditable verdicts from conflicting sources.
SummerEyes is a reasoning engine built by Upside Down Research that analyzes claims from multiple sources to detect contradictions, assess confidence, evaluate timelines, and identify what evidence is missing. It uses formal mathematics — not vibes — to produce verdicts you can trace and audit.
How to connect
- MCP endpoint: POST https://api.summereyes.vip/mcp/v1 (JSON-RPC 2.0, Streamable HTTP)
- REST API: POST https://api.summereyes.vip/api/v1/investigations/analyze
- Interactive API docs: https://api.summereyes.vip/docs
- OpenAPI spec: https://api.summereyes.vip/openapi.json
- Auth: x-api-key header (get a key at https://summereyes.vip/dashboard/connection)
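As a minimal sketch, a REST call is a POST with the `x-api-key` header and a JSON body. The payload fields below (`question`, `sources`, `claims`) are illustrative placeholders — consult the OpenAPI spec at https://api.summereyes.vip/openapi.json for the authoritative schema.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # from https://summereyes.vip/dashboard/connection

# Illustrative payload; real field names come from the OpenAPI spec.
payload = {
    "question": "Did the company meet Q3 revenue guidance?",
    "sources": [{"id": "analyst-1", "source_type": "Analyst"}],
    "claims": [],
}

req = urllib.request.Request(
    "https://api.summereyes.vip/api/v1/investigations/analyze",
    data=json.dumps(payload).encode("utf-8"),
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The request is built but not sent, so you can inspect the method, URL, and headers before wiring in a real key.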
What it does
You submit an investigation: a research question, sources (actors), entities (subjects), claims, and evidence. The engine runs four formal reasoning systems:
- Source Weighting — each source gets an effective credibility score from reliability, source type, topic competence, and conflicts of interest.
- Temporal Decay — older claims lose weight. Decay rate depends on domain (finance: 180 days, science: 25 years), source authority, claim maturity, and epistemic status. Corroboration resets the clock.
- Opinion Fusion — three-level evidence fusion: within-source deduplication, within-group citation chains, cross-group independent corroboration. Each result is an opinion triple satisfying belief + disbelief + uncertainty = 1.0.
- Conflict Resolution — formal argumentation finds every coherent interpretation of the evidence. Each gets a coherence score. Walk the interpretation tree to trace the reasoning.
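Two of these systems can be illustrated with a few lines of Python. This is a sketch only, not the engine's actual formulas: it assumes a subjective-logic-style opinion triple for fusion output, and exponential half-life decay where the domain figures above (finance: 180 days) are treated as half-lives.

```python
import math
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic-style triple; uncertainty is the remaining mass."""
    belief: float
    disbelief: float

    @property
    def uncertainty(self) -> float:
        # Invariant from the docs: belief + disbelief + uncertainty = 1.0
        return 1.0 - self.belief - self.disbelief

def decay_factor(age_days: float, half_life_days: float) -> float:
    """Weight remaining after age_days, assuming exponential half-life decay."""
    return 0.5 ** (age_days / half_life_days)

op = Opinion(belief=0.6, disbelief=0.1)   # uncertainty works out to 0.3
finance_weight = decay_factor(180, 180)   # a 180-day-old finance claim keeps half its weight
```

Corroboration "resetting the clock" would correspond to setting `age_days` back to zero when independent support arrives.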
What you get back
- subject_results: per-subject verdicts (belief/disbelief/uncertainty, expected_probability, truth status: True/False/Both/Neither)
- ranked_claims: all claims sorted by net evidence strength (strong/moderate/contested/weak/refuted)
- conflict_analysis: interpretation trees showing which claims survive challenges, who supports each reading, attack/support edges
- claim_analyses: per-claim belief/disbelief/uncertainty, temporal_decay_factor, epistemic_status, freshness_score
- sensitivity_analysis: evidence gaps ordered by potential impact, discriminating claims, suggestions for what to investigate next
- temporal_analysis: stale claims, corroborated claims, supersession chains, overall freshness
- warnings: input validation issues to address for better results
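A hypothetical response fragment shows how these fields compose and which invariants you can check client-side. The field names follow the list above; the values are made up, and nested shapes may differ from the real API.

```python
# Made-up response fragment; consult the API docs for the real shape.
response = {
    "subject_results": [
        {"subject": "ACME", "belief": 0.55, "disbelief": 0.25,
         "uncertainty": 0.20, "expected_probability": 0.65,
         "truth_status": "True"},
    ],
    "ranked_claims": [
        {"claim_id": "c1", "strength": "strong"},
        {"claim_id": "c2", "strength": "contested"},
    ],
    "warnings": [],
}

# Every subject's opinion triple should sum to 1.0.
for result in response["subject_results"]:
    total = result["belief"] + result["disbelief"] + result["uncertainty"]
    assert abs(total - 1.0) < 1e-9

# ranked_claims arrives sorted by net evidence strength, strongest first.
strongest = response["ranked_claims"][0]["claim_id"]
```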
Key concepts
- Source types: Analyst, Journalist, Expert, Insider, Regulator, Institutional, Anonymous, SocialMedia, Troll
- Claim types: Factual, Predictive, Evaluative, Causal, Procedural, Methodological
- Valence: Supports (affirms predicate), Refutes (denies predicate), Neutral
- Epistemic status: conjecture → hypothesis → theory → law, in increasing maturity (also: superseded, retracted)
- Domains: Finance, News, Technology, Geopolitics, Medicine, Science, Legal, General
- Scope: disambiguates same-predicate claims (e.g. "global" vs "US") — different scopes don't contradict
- Numeric proximity: financial values within 5% are not treated as contradictions
- Credibility floor: sources below ~0.15 effective reliability are acknowledged but zeroed in fusion
- Summary mode: set summary_mode: true for compact output (top findings, contested claims, evidence gaps, synthesis paragraph)
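The concepts above can be tied together in a sketch of an investigation payload. The exact field names (`source_type`, `valence`, `scope`, and so on) are illustrative assumptions; the enum values are the ones listed above, and the OpenAPI spec is authoritative.

```python
# Enum values as listed in the Key concepts section.
SOURCE_TYPES = {"Analyst", "Journalist", "Expert", "Insider", "Regulator",
                "Institutional", "Anonymous", "SocialMedia", "Troll"}
VALENCES = {"Supports", "Refutes", "Neutral"}

investigation = {
    "question": "Is the new battery chemistry commercially viable?",
    "domain": "Technology",
    "summary_mode": True,  # compact output: top findings, gaps, synthesis
    "sources": [
        {"id": "s1", "source_type": "Expert"},
        {"id": "s2", "source_type": "SocialMedia"},
    ],
    "claims": [
        {"id": "c1", "source": "s1", "claim_type": "Factual",
         "valence": "Supports", "scope": "global"},
        # Different scope from c1, so per the docs these don't contradict.
        {"id": "c2", "source": "s2", "claim_type": "Evaluative",
         "valence": "Refutes", "scope": "US"},
    ],
}

# Cheap client-side validation against the documented enums.
for s in investigation["sources"]:
    assert s["source_type"] in SOURCE_TYPES
for c in investigation["claims"]:
    assert c["valence"] in VALENCES
```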
Full documentation
For the complete schema, worked example, and detailed field reference:
Links