Know Who Is Speaking
The internet made everyone a publisher. Social media made everyone an authority. Algorithms rewarded heat over light. The result is an information ecosystem where the loudest voice and the most credible voice have become indistinguishable, and where readers have no reliable way to tell them apart.
This is not a technology problem. It is an infrastructure problem. We have built remarkable systems for distributing ideas at scale. We have built almost nothing for verifying the credibility of the people distributing them.
By some estimates, more content is now published every 48 hours than humanity created in total before 2003. Volume has become an enemy of discernment.
Research on overconfidence, most famously the Dunning-Kruger effect, consistently shows that the least informed are often the most certain, while genuine experts hedge, qualify, and acknowledge complexity.
A tweet, a blog post, a comment thread: content travels without its author. Readers inherit the idea with no knowledge of its origin.
Media formats that present "both sides" treat a Nobel laureate and an enthusiastic amateur as equivalent sources. They are not.
The cost of this infrastructure failure is measured in public health crises fueled by misinformation, in financial markets moved by unqualified analysis, in policy debates hijacked by confident amateurs, and in a slow erosion of the public's ability to reason toward verifiable truth.
We propose a perennial, living credibility infrastructure: not a one-time certification, not a gatekeeping institution, but a continuously updated, transparent, and portable record of what a person actually knows, has done, and has demonstrated in a given domain.
This is not censorship. It is context. It does not prevent anyone from speaking. It ensures that when they do, readers understand exactly what weight to give their words.
Think of it as a nutrition label for ideas. You are free to eat what you choose. But you deserve to know what's in it.
Universities, think tanks, and consultancies gain a trusted framework to present their contributors' genuine qualifications: not just titles, but demonstrated knowledge.
Writers and thinkers who have built genuine expertise gain a portable, verifiable credential that travels with their work wherever it appears.
Audiences gain the context they have always deserved: a clear, honest signal of the credibility behind every claim they encounter.
A new norm emerges: that publishing without context is incomplete. That quality rises when credibility is visible. That trust is rebuilt.
Every published idea exists in a context of who produced it and why they are, or are not, positioned to do so. That context belongs with the idea, not locked in an author bio no one reads. We commit to making context visible, persistent, and honest.
A doctorate is one signal. A decade of practitioner experience is another. A track record of accurate public predictions is a third. Our framework measures the full spectrum of demonstrated knowledge (formal education, applied practice, published record, peer recognition, and real-world outcomes) because no single axis captures the truth of expertise.
A credential is a snapshot. Real expertise is a living record. Our system is perennial: profiles update as new contributions are made, as speaking engagements accumulate, as publications appear, as communities validate. Credibility is not a destination but a practice.
An eminent cardiologist speaking about nutrition policy is not an authority on nutrition policy. An acclaimed novelist offering economic analysis carries no special weight in economics. Our framework is domain-specific by design: it does not inflate expertise from one field to another, because doing so would reproduce exactly the problem we are solving.
This is not an old-guard gatekeeping mechanism. Every contributor begins somewhere. Our framework includes entry-level categories that are honest about limited track records while still providing context โ and that create a visible, achievable path toward greater credibility over time.
A credibility profile that lives on one platform and dies when content is shared is no profile at all. Our system is designed to be embeddable, linkable, and persistent โ so that when an idea travels, its author's verified context travels with it.
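As an illustration only, a portable profile of this kind might serialize to a small, self-describing payload that any platform could embed or link alongside content. Every field name, level number, and the example identifier below are hypothetical; the source does not specify a schema. The sketch borrows ORCID's documented example iD format purely for flavor:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DomainCredential:
    """Hypothetical per-domain credibility record; all field names are illustrative."""
    domain: str                                   # credentials never transfer across domains
    level: int                                    # 1-5, per the five-level framework
    signals: dict = field(default_factory=dict)   # signal category -> verified evidence count
    last_updated: str = ""                        # ISO date; a living record, not a snapshot

@dataclass
class CredibilityProfile:
    contributor_id: str                           # persistent identifier, ORCID-style
    credentials: list = field(default_factory=list)

    def to_embed_json(self) -> str:
        """Serialize to a payload a publisher could attach to any piece of content."""
        return json.dumps(asdict(self), indent=2)

profile = CredibilityProfile(
    contributor_id="0000-0002-1825-0097",  # ORCID's published example iD, used as a placeholder
    credentials=[DomainCredential(
        domain="cardiology",
        level=4,
        signals={"published_record": 42, "peer_recognition": 7},
        last_updated="2024-06-01",
    )],
)
print(profile.to_embed_json())
```

Keying each credential to a single domain is what keeps the profile honest when it travels: the cardiologist's level 4 record says nothing about nutrition policy.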
Our framework places every contributor into one of five levels within a given domain, based on a composite assessment of formal qualifications, practical experience, published record, community recognition, and demonstrated predictive accuracy.
Level 1: Engaged, curious, and learning. Contributes personal perspective with limited formal or demonstrated expertise in this domain.
Reader takeaway: Personal opinion. Worth reading as a perspective, not as an authoritative claim.
Level 2: Actively building expertise through formal coursework, certifications, structured study, or entry-level practice.
Reader takeaway: Building knowledge. Claims should be cross-referenced with more established sources.
Level 3: Professional-grade expertise built through sustained practice, demonstrated outputs, and recognized contributions within a field.
Reader takeaway: Reliable domain expertise. Claims are grounded in substantial experience and verifiable record.
Level 4: Recognized leader whose work has shaped the direction of a domain through research, practice, teaching, or institutional influence.
Reader takeaway: High-confidence source. Has shaped the conversation in this domain, not just participated in it.
Level 5: A foundational voice whose ideas have entered the permanent record of a field through paradigm-shifting research, transformative practice, or generational influence.
Reader takeaway: Canonical source. Has not merely participated in the field; has helped create it.
Our assessment draws on eight categories of credibility signals, weighted by domain and context. No single signal is disqualifying or conclusive: credibility is a composite, not a checkbox.
Formal education: Degrees, postgraduate study, professional qualifications. The foundation: necessary in many fields, insufficient on its own in most.
Practical experience: Years in practice, roles held, organizations served. The gap between knowing and doing is where credibility is often forged or exposed.
Published record: Books, peer-reviewed papers, reports, substantial articles. The traceable, reviewable evidence of what someone has contributed to the knowledge commons.
Speaking engagements: Conference presentations, keynotes, invited talks. Signals community trust and willingness to subject ideas to live scrutiny.
Public presence: Blogs, newsletters, podcasts, with attention to quality, consistency, and the caliber of audience engagement, not raw follower counts.
Peer recognition: Awards, editorial roles, advisory board appointments, citations from others in the field. How the field itself evaluates the contributor.
Predictive track record: For applicable fields, a track record of forecasts, recommendations, or analyses that can be verified against outcomes. The rarest and most underused signal.
Ongoing learning: Recent courses, updated certifications, new research activity. Evidence that expertise is being maintained, not just asserted based on past credentials.
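To make the composite idea concrete, here is a minimal sketch of how eight per-signal scores within a single domain might roll up into one of the five levels. The signal names echo the categories above, but every weight and threshold is invented for illustration; the source specifies none of them:

```python
# Hypothetical composite scoring sketch. Weights and thresholds are
# illustrative assumptions, not part of the proposed framework.

SIGNAL_WEIGHTS = {  # the eight signal categories; in practice weighted per domain
    "formal_education": 0.15,
    "practical_experience": 0.20,
    "published_record": 0.15,
    "speaking_engagements": 0.10,
    "public_presence": 0.05,
    "peer_recognition": 0.15,
    "predictive_track_record": 0.15,
    "ongoing_learning": 0.05,
}

# Composite score thresholds for levels 5 down to 2; anything below is level 1.
LEVEL_THRESHOLDS = [(0.85, 5), (0.65, 4), (0.40, 3), (0.15, 2)]

def composite_level(signal_scores: dict, weights: dict = SIGNAL_WEIGHTS) -> int:
    """Map per-signal scores in [0, 1] to a level from 1 to 5.

    No single signal is disqualifying or conclusive: a missing signal simply
    contributes zero, and no one signal alone can reach a high tier.
    """
    score = sum(
        weights[s] * min(max(signal_scores.get(s, 0.0), 0.0), 1.0)
        for s in weights
    )
    for threshold, level in LEVEL_THRESHOLDS:
        if score >= threshold:
            return level
    return 1

# A strong practitioner profile within one domain:
scores = {
    "formal_education": 0.8,
    "practical_experience": 0.9,
    "published_record": 0.6,
    "peer_recognition": 0.5,
    "ongoing_learning": 1.0,
}
print(composite_level(scores))  # prints 3
```

The design choice worth noting is the cap on each signal: because every contribution is clipped to its weight, an impressive record on one axis (say, a huge public following) cannot by itself vault a contributor into the upper tiers.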
We are not asking institutions to gatekeep. We are asking them to be transparent. We are not asking readers to trust only the credentialed. We are asking them to demand context from everyone who seeks to influence them.
The norm we want to establish is simple, and it is overdue: know who is speaking.
Not their name. Not their title. Not their employer. But their actual, demonstrated, verifiable relationship to the ideas they are sharing, and your right, as a reader, to know exactly what that is.
This is our commitment to a world where verifiable truth has infrastructure, where expertise is visible, and where the journey from confident assertion to demonstrated knowledge is one that every voice can take, and that every reader can see.
We are building coalitions with universities, research institutions, investment firms, consultancies, and publishing platforms who share this commitment.
Several organizations have built partial versions of what we're proposing. None has done it fully, publicly, or in a way that travels with content. Here is the landscape, and what each model offers for inspiration or partnership.
What they do: ORCID is a nonprofit that provides every researcher with a unique, persistent digital identifier (the "ORCID iD") that travels with them across institutions, publishers, and career stages. It links a researcher to all their verified outputs: publications, peer reviews, grants, affiliations, and more. Major publishers like Elsevier, Wiley, PLOS, and Sage are integrated. Funding bodies including NIH and the UK Research Councils now require ORCID iDs during grant submission.
Why it matters for us: ORCID is the closest existing infrastructure to a portable, verifiable contributor identity. It currently focuses on the academic research world. The gap we can close: ORCID doesn't score, tier, or communicate credibility to lay readers; it is a record, not an interpretation.
Website: orcid.org · Members: 850+ institutions · IDs issued: 15M+
What they do: Founded in 1998, GLG pioneered the "expert network" industry: a commercial marketplace that connects investors, consultants, and corporations with verified subject-matter experts for paid one-on-one consultations. Their network spans over 1.2 million professionals across 150+ countries. Clients include hedge funds, private equity firms, strategy consultancies, and pharmaceutical companies. GLG actively vets experts, profiles their experience, and matches them to client inquiries.
Why it matters for us: GLG has proven that the market will pay significant premiums, often $1,000+ per hour, to access verified expertise. This validates the core premise: credibility has real economic value. The gap: GLG's expert profiles are private, not public. They serve the buyers of expertise, not the general reading public.
Key competitors building on this model: AlphaSense-Tegus (merged 2024, adds AI-powered transcript search across 100K+ expert calls), Third Bridge, AlphaSights, Guidepoint. The "Big Five" expert networks collectively represent a ~$2 billion annual market.
What they do: All three MBB firms run parallel "expert tracks" alongside their generalist consulting career paths. At McKinsey, experts progress through a sequence of roles (Specialist, Expert, Expert Associate Partner, Expert Partner), each representing increasing depth and recognition within a defined domain (e.g., Digital, Life Sciences, Operations). Experts are evaluated on domain depth, not generalist consulting skills, and are explicitly distinguished from generalist partners in terms of how they are deployed.
Why it matters for us: These firms have operationalized the insight that domain expertise requires different credentialing than general management ability, and have built institutional structures that reflect it. The gap: these systems are entirely internal. Clients see the firm's brand, not the individual expert's tier.
What they do: Clarivate's Web of Science incorporates Publons, a platform that tracks, verifies, and publicly displays researchers' peer review activity. Combined with citation metrics like the h-index (which measures both publication volume and citation impact) and the Field-Weighted Citation Impact (FWCI) used in Times Higher Education rankings, these systems create a quantitative fingerprint of a researcher's influence and standing in their field.
Why it matters for us: This is the most advanced existing system for communicating credibility through numbers. The gap: it covers only academic researchers; it is opaque to non-academic audiences; and it rewards quantity of output over quality of public communication.
What they do: Gartner publicly distinguishes between "Analyst," "Vice President," "Distinguished Vice President," and "Fellow," their highest research designation. These titles are visibly attached to every piece of published research, giving enterprise clients an immediate signal about the seniority and depth of the source. Gartner's model is one of the few in any industry where contributor credentials are routinely surfaced alongside content as a feature, not a footnote.
Why it matters for us: Gartner demonstrates that tiering contributors and making tiers visible increases the perceived value of content, and that readers actually use this information. The gap: Gartner's tiers reflect internal career progression, not independently verified domain expertise. And they serve paying enterprise clients, not the open public.
Every existing system described above suffers from at least one of three fatal limitations: it is private (not visible to general readers), siloed (only works within one institution or platform), or passive (a record of past activity rather than an active, interpreted credibility signal). ORCID comes closest to portability but does not communicate to lay audiences. Expert networks come closest to verified expertise but serve commercial buyers, not readers. Bibliometrics come closest to scoring but only apply to academics.
The strategic insight is this: the infrastructure already exists in fragments. Our initiative is the synthesis, and the public-facing translation layer that turns verified expertise into readable, actionable context for every person who encounters an idea.