Oriel is not a ChatGPT wrapper. Here is what is underneath.
Oriel produces a single reading-length brief on a donor prospect, ready well before your next meeting. People reasonably ask whether that's just ChatGPT in a nicer wrapper. It isn't, and this page is the long answer. There are three layers: a segment-specific research pipeline, a synthesis layer built around a working prospect researcher's judgment, and a designed publication as output. Each layer does work a general-purpose AI cannot do, and the three together are the product.
The pipeline
Every brief is built by a workflow that runs against public sources a general-purpose AI doesn't know to look at, in an order it doesn't know to follow. SEC filings and proxy disclosures. Full-text 990s from the IRS. County real estate records. Donor walls and annual reports from peer institutions. Board listings and foundation grant histories. Named giving vehicles — endowed scholarships, facilities naming, donor societies. State corporation filings for private owners.
The pipeline knows which sources matter for which kind of wealth. A private business owner is researched differently from an inherited-wealth heir, who is researched differently from a private equity limited partner. Each wealth segment has its own retrieval logic, its own evidence priorities, and its own failure modes to watch for.
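As a rough illustration of what "each segment has its own retrieval logic" means in practice, the dispatch can be sketched as a lookup table mapping a wealth segment to an ordered list of sources. Everything here is a hypothetical sketch: the segment names, source labels, and function are illustrative stand-ins, not Oriel's actual schema.

```python
# Hypothetical sketch of segment-specific source priorities.
# Segment keys and source labels are illustrative only.
SEGMENT_SOURCES = {
    "private_business_owner": [
        "state_corporation_filings",   # binding evidence for private owners
        "county_property_records",
        "business_press",
    ],
    "inherited_wealth_heir": [
        "family_foundation_990s",      # worth more than the prospect's resume
        "county_property_records",
        "peer_institution_donor_walls",
    ],
    "private_equity_lp": [
        "sec_filings",
        "board_listings",
        "foundation_grant_histories",
    ],
}

def research_plan(segment: str) -> list[str]:
    """Return the ordered source list for a wealth segment."""
    try:
        return SEGMENT_SOURCES[segment]
    except KeyError:
        raise ValueError(f"no retrieval logic defined for segment: {segment}")
```

The design point is the ordering as much as the membership: for each segment the pipeline consults its binding sources first, rather than whatever a general search surfaces first.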
This is the layer where "I'll just use ChatGPT" quietly breaks down. ChatGPT will return a LinkedIn bio and call it research. Oriel knows that for a private owner the binding evidence lives in state corporation filings and property records, not in press releases — and that for an inherited-wealth prospect the family foundation's 990 is worth more than the prospect's own résumé.
The discernment model
The research is then handed to a synthesis layer built around a working prospect researcher's judgment — not a generic prompt asking a model to summarize findings. The model has been taught the specific moves that distinguish a useful brief from a confident-sounding one.
It knows that a named endowed scholarship implies a capacity floor, not a capacity ceiling. It knows that corporate foundation giving routed through a CEO is not personal philanthropy and must be flagged rather than counted. It knows when a wide capacity range is honest information and when it's hedging. It commits to characterizations rather than listing tradeoffs, because that is what a trusted colleague does across a desk and what a database structurally cannot do.
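Two of the judgment calls above can be made concrete as explicit rules: a named scholarship sets a floor on capacity, and employer-routed foundation giving gets flagged rather than counted. This is a toy sketch, not Oriel's discernment model; the `Signal` type and rule outputs are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str            # e.g. "endowed_scholarship", "corporate_foundation_gift"
    amount: float
    via_employer: bool = False

def interpret(signal: Signal) -> dict:
    """Toy versions of two discernment rules described in the text."""
    if signal.kind == "endowed_scholarship":
        # A named endowed scholarship establishes a capacity floor,
        # never a ceiling: the donor demonstrably had at least this much.
        return {"capacity_floor": signal.amount, "count_as_personal": True}
    if signal.kind == "corporate_foundation_gift" and signal.via_employer:
        # Corporate giving routed through a CEO is flagged, not counted
        # toward personal philanthropy.
        return {"capacity_floor": 0.0, "count_as_personal": False,
                "flag": "corporate vehicle, not personal philanthropy"}
    # Default: count the signal but infer no floor from it.
    return {"capacity_floor": 0.0, "count_as_personal": True}
```

The point of writing the rules out is that each one is a committed call on the evidence, not a hedge left for the reader to resolve.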
This is the part that is hardest to copy and easiest to underestimate. A prompt is a paragraph. The Oriel discernment layer is the distillation of years of practitioner work into a system that produces the same calls a senior researcher would make on the same evidence. A competitor can copy the idea overnight. Reproducing the judgment requires the practitioner.
The output
The brief itself is built as a publication, not a chat response. Marginalia carry EINs, filing dates, and property tax amounts next to the prose they support. The capacity rating module names a lead signal, the evidence that corroborates it, and an explicit "what would change this" line. Every factual claim is numbered against a source on the evidence page.
The point of the format is that a major gift officer should be able to spot-check any claim in fifteen seconds and, once a few check out, trust the rest. Density of specificity is the product.
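The claim-numbering scheme behind that fifteen-second spot-check can be sketched as a simple data structure: every claim gets a number at the moment it enters the brief, and the number resolves back to a source in one lookup. The class and field names here are hypothetical, chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    number: int
    text: str
    source: str   # e.g. a filing type and date carried in the marginalia

@dataclass
class Brief:
    claims: list[Claim] = field(default_factory=list)

    def add(self, text: str, source: str) -> int:
        """Register a factual claim against its source; return its number."""
        n = len(self.claims) + 1
        self.claims.append(Claim(n, text, source))
        return n

    def spot_check(self, number: int) -> str:
        """Resolve a numbered claim back to its source in one step."""
        return self.claims[number - 1].source
```

A gift officer's spot-check is then literally `spot_check(7)`: one number, one source, one verdict on whether to trust the rest.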
The honest moat
The fair version of the competitive question is: what is the moat against a general model that gets better every six months? Three things. The retrieval pipeline is segment-specific and compounds as we add segments. The discernment layer encodes practitioner judgment that doesn't exist in any training set. And the peer cohort dataset — real comparable prospects with documented giving — grows with every brief and becomes structurally harder to replicate the longer Oriel runs.
A wrapper around ChatGPT can be built in a weekend. None of the above can.