How Prism actually scores a stock.
No black box, no hidden weights. Every number on every page traces back to a public input and a published rule.
1. Data sources
Prism is a derivative tool — every fundamental or price metric is sourced from a named third party, never invented internally. We deliberately keep the source list short so users can audit any number end-to-end.
| Source | What it provides |
| --- | --- |
| Yahoo Finance (yahoo-finance2) | Quote, profile, key statistics, financials, calendar events, earnings history, holders, news. |
| SEC EDGAR (13F-HR) | Superinvestor positions and quarterly changes for the curated investor universe. |
| SEC EDGAR (Form 4) | Insider open-market buys and sells (filtered for direct, non-derivative, post-cleanup). |
| Coverage universe (~250 names) | Hand-curated US large/mid-cap list intended as a research starting point, not a screener of all listed equities. |
Yahoo Finance fields are best-effort; thinly traded names may have null fields, in which case the affected rule reports as "missing data" rather than failing or passing silently.
2. Investor frameworks
Each preset framework is a list of rules of the form { metricId, op, threshold, weight }. Evaluation runs the rule list against the stock's current metrics, computes a weighted pass-rate (0-100), and maps it to a fit label using the framework's fitBands. The full rule list is inspectable in Studio, and every threshold is user-adjustable.
Why this design. A framework is a published opinion of a real investor; we encode the most-cited screen they describe. For example, the Graham Number rule encodes the formula √(22.5 × EPS × BVPS) exactly as published. We do not try to capture qualitative judgement (moats, management quality, narrative) — those remain the user's job.
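The evaluation loop above can be sketched in a few lines. The shapes and names here (frameworkScore, OPS, skipping null metrics and renormalising weights) are illustrative assumptions consistent with the missing-data behaviour described in section 1, not Prism's actual API:

```typescript
// Hypothetical rule/metric shapes mirroring { metricId, op, threshold, weight }.
type Op = "gt" | "lt" | "gte" | "lte";
interface Rule { metricId: string; op: Op; threshold: number; weight: number; }

const OPS: Record<Op, (v: number, t: number) => boolean> = {
  gt: (v, t) => v > t,
  lt: (v, t) => v < t,
  gte: (v, t) => v >= t,
  lte: (v, t) => v <= t,
};

// Weighted pass-rate over the rules that have data. Missing metrics are
// skipped and the weights renormalised, so a null never passes or fails silently.
function frameworkScore(rules: Rule[], metrics: Record<string, number | null>): number {
  let passed = 0, total = 0;
  for (const r of rules) {
    const v = metrics[r.metricId];
    if (v == null) continue; // missing data: exclude, don't fail
    total += r.weight;
    if (OPS[r.op](v, r.threshold)) passed += r.weight;
  }
  return total === 0 ? 0 : Math.round((passed / total) * 100);
}

// Graham Number exactly as published: sqrt(22.5 x EPS x BVPS).
const grahamNumber = (eps: number, bvps: number) => Math.sqrt(22.5 * eps * bvps);

const rules: Rule[] = [
  { metricId: "pe", op: "lt", threshold: 15, weight: 2 },
  { metricId: "roe", op: "gte", threshold: 0.10, weight: 1 },
];
console.log(frameworkScore(rules, { pe: 12, roe: 0.08 })); // pe passes (w=2), roe fails (w=1) → 67
```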
3. Fair-value pillars
Three independent estimates are blended, weighted by quality:
- Earnings power — trailing EPS × quality-adjusted multiple. Multiple is 12× base, +0.4× per ROE point above 10%, +0.6× per growth point above 5%, capped 8×–35×.
- FCF yield — FCF/share ÷ a demanded yield. The demanded yield starts at a 5% baseline, rises 1pp for each 0.5× of net-debt/EBITDA above 1×, and is capped at 12%.
- Asset value — book value/share × an asset multiple from 1.0× (low ROE) up to 3.0× (25%+ ROE).
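The earnings-power multiple from the first bullet can be written out directly. This is a sketch under stated assumptions: inputs are in percentage points, and bonuses are floored at zero below their baselines (the doc doesn't say whether sub-baseline ROE or growth subtracts from the multiple):

```typescript
// Quality-adjusted multiple: 12x base, +0.4x per ROE point above 10%,
// +0.6x per growth point above 5%, clamped to the 8x-35x band.
function earningsMultiple(roePct: number, growthPct: number): number {
  const base = 12;
  const roeBonus = Math.max(0, roePct - 10) * 0.4;    // assumed floor at 0
  const growthBonus = Math.max(0, growthPct - 5) * 0.6; // assumed floor at 0
  return Math.min(35, Math.max(8, base + roeBonus + growthBonus));
}

// ROE 20% → +4.0x; growth 10% → +3.0x; 12 + 4 + 3 = 19x.
console.log(earningsMultiple(20, 10) * 3.0); // trailing EPS $3.00 → $57 earnings-power value
```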
Compounders (ROE ≥ 15%) weight the earnings/FCF/asset pillars at 0.45/0.45/0.10. Capital destroyers (ROE < 15%) flip the asset weight up to 0.40, so we don't capitalise optimistic earnings on a deteriorating book.
The margin-of-safety price defaults to 30% below the central estimate. The band visualisation shows current price relative to MoS / low / base / high.
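A minimal sketch of the blend and margin-of-safety band. Two assumptions are ours: the earnings/FCF split in the low-ROE case (the doc only states the asset weight rises to 0.40), and renormalising weights when a pillar is unavailable:

```typescript
interface Pillars { earnings?: number; fcf?: number; asset?: number; }

function fairValue(p: Pillars, roe: number): { base: number; mos: number } {
  const weights = roe >= 0.15
    ? { earnings: 0.45, fcf: 0.45, asset: 0.10 }  // compounder split (as documented)
    : { earnings: 0.30, fcf: 0.30, asset: 0.40 }; // low-ROE split: asset 0.40 documented, rest assumed
  let sum = 0, wTotal = 0;
  for (const k of ["earnings", "fcf", "asset"] as const) {
    const v = p[k];
    if (v == null) continue; // renormalise when a pillar can't be estimated (assumption)
    sum += v * weights[k];
    wTotal += weights[k];
  }
  const base = wTotal ? sum / wTotal : NaN;
  return { base, mos: base * 0.70 }; // default 30% margin of safety below base
}

const { base, mos } = fairValue({ earnings: 100, fcf: 90, asset: 60 }, 0.20);
console.log(base, mos); // 91.5 central estimate, 64.05 MoS price
```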
4. Reverse DCF
Inverts the standard DCF: given price and a discount rate, what FCF growth must the market be assuming? Solved by bisection over [-15%, 60%] for 60 iterations. Discount is risk-adjusted (8% baseline + 1pp per net-debt/EBITDA above 1, capped 13%). Terminal growth is 2.5%, explicit projection 10 years.
Verdicts (conservative / reasonable / aggressive / extreme / unattainable) are calibrated against the company's own historical EPS+revenue blend, when available; absolute thresholds are used as fallback.
5. A-F scorecard
Seven dimensions, each scored 0-100 from a small fixed set of metrics, then mapped to a letter:
| Dimension | Inputs |
| --- | --- |
| Quality | ROE, gross margin, operating margin |
| Valuation | Discount to fair-value base (or fallback P/E) |
| Growth | EPS growth and revenue growth |
| Balance sheet | Debt/Equity, current ratio |
| Momentum | Position within 52-week range |
| Income | Yield, payout-ratio penalty |
| Risk | Inverse of leverage, short-interest days-to-cover |
Overall is a weighted average — Quality 25%, Valuation 25%, Growth 15%, Balance Sheet 10%, Risk 15%, Momentum 5%, Income 5%. Categories with insufficient data are skipped and weights renormalised.
Letter cutoffs: A+ ≥ 95, A ≥ 88, A- ≥ 82, B+ ≥ 76, B ≥ 70, B- ≥ 64, C+ ≥ 58, C ≥ 52, C- ≥ 46, D ≥ 35, F otherwise.
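The overall blend and letter mapping look roughly like this. Category names and the null-means-insufficient-data convention are illustrative; the weights and cutoffs are the documented ones:

```typescript
const WEIGHTS: Record<string, number> = {
  quality: 0.25, valuation: 0.25, growth: 0.15,
  balanceSheet: 0.10, risk: 0.15, momentum: 0.05, income: 0.05,
};
// Letter cutoffs, highest first; anything below 35 is an F.
const CUTOFFS: [number, string][] = [
  [95, "A+"], [88, "A"], [82, "A-"], [76, "B+"], [70, "B"], [64, "B-"],
  [58, "C+"], [52, "C"], [46, "C-"], [35, "D"],
];

function overall(scores: Record<string, number | null>): { score: number; letter: string } {
  let sum = 0, w = 0;
  for (const [k, weight] of Object.entries(WEIGHTS)) {
    const s = scores[k];
    if (s == null) continue; // insufficient data: skip and renormalise
    sum += s * weight;
    w += weight;
  }
  const score = w ? sum / w : 0;
  const letter = CUTOFFS.find(([min]) => score >= min)?.[1] ?? "F";
  return { score, letter };
}

const r = overall({
  quality: 90, valuation: 70, growth: 80,
  balanceSheet: 75, risk: 60, momentum: null, income: null,
});
console.log(r); // 68.5 / 0.90 ≈ 76.1 → "B+"
```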
6. Mistake detector
Pattern-matches the current metrics against nine named failure modes: value trap, quality trap, yield trap, momentum trap, cyclical peak, leverage trap, growth deceleration, accounting red flag, dilution risk. Each pattern requires multiple corroborating signals — never a single metric — so flags are deliberately conservative.
Severity (Low / Medium / High) is determined by how many of the pattern's signals are present. Each flag exposes its evidence, what would reduce the risk, and what to monitor — so the user can disagree with the flag from a position of information.
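The corroboration requirement can be made concrete with a sketch. The value-trap signals and the exact hit-count-to-severity mapping below are our illustrative assumptions; only the "never flag on a single metric" rule is documented:

```typescript
type Metrics = { pe: number; epsGrowth: number; fcfYield: number; payout: number };
interface Pattern { name: string; signals: ((m: Metrics) => boolean)[]; }

// Hypothetical signal set for one failure mode.
const valueTrap: Pattern = {
  name: "value trap",
  signals: [
    m => m.pe < 8,          // optically cheap
    m => m.epsGrowth < 0,   // deteriorating earnings
    m => m.fcfYield < 0.02, // weak cash generation
    m => m.payout > 0.9,    // stretched dividend
  ],
};

function evaluate(p: Pattern, m: Metrics): { severity: string | null; hits: number } {
  const hits = p.signals.filter(s => s(m)).length;
  if (hits < 2) return { severity: null, hits }; // require corroboration: never flag on one metric
  const severity = hits >= 4 ? "High" : hits === 3 ? "Medium" : "Low"; // mapping assumed
  return { severity, hits };
}

console.log(evaluate(valueTrap, { pe: 6, epsGrowth: -0.05, fcfYield: 0.05, payout: 0.5 }));
// two corroborating signals → { severity: "Low", hits: 2 }
```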
7. Disagreement engine
Classifies the nature of the debate, not just polarisation. Each framework is assigned a style bucket (quality / value / deep-value / growth / momentum / income). The endorsing and rejecting camps' style sets feed a small rule-set that maps to named debate kinds: quality-vs-valuation, growth-vs-expectations, turnaround-vs-value-trap, momentum-vs-fundamentals, income-vs-sustainability, cyclical-timing, plus consensus shortcuts.
Polarisation is the standard deviation of all framework scores, scaled to 0-100 — a secondary signal under the headline classification.
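The polarisation number is a one-liner in spirit. One assumption in this sketch: scaling divides by 50, the maximum possible population standard deviation of scores bounded to 0-100 (the doc says "scaled to 0-100" without giving the divisor):

```typescript
// Population standard deviation of framework scores, rescaled so a
// maximal 50/50 split at the extremes reads as 100.
function polarisation(scores: number[]): number {
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  const variance = scores.reduce((a, b) => a + (b - mean) ** 2, 0) / scores.length;
  return Math.min(100, (Math.sqrt(variance) / 50) * 100);
}

console.log(polarisation([100, 100, 0, 0])); // maximal disagreement → 100
console.log(polarisation([50, 50, 50]));     // unanimous → 0
```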
8. What we don't model
Important things Prism makes no attempt to capture:
- Qualitative moats — switching costs, network effects, brand, regulation.
- Management quality — capital allocation track record, incentives, integrity.
- Industry structure — Porter's-five-forces dynamics, competitive intensity.
- Macro / cycles — interest-rate regime, currency, geopolitical risk.
- Path-dependent risks — pending litigation, FDA outcomes, single-customer concentration.
A clean Prism read is necessary but not sufficient for ownership. Treat the verdict as a checklist that the quantitative case is intact, not as an instruction.
For research and educational purposes only. Not financial advice. Past performance does not guarantee future results. Conduct independent due diligence before making any investment decision.