ClarkAI
Seed Stage Dossier · April 2026 · Not For Public Circulation
Startup Report

Clark AI is building a Hierarchically Detached Federated Mixture-of-Experts intelligence infrastructure: a 70-billion-parameter backbone — the circuit — to which 10,000 specialist expert models of 100M–500M parameters each — the lightbulbs — are attached. Total system parameter coverage: 5 trillion+. India first. Then the world.

Founding Insight

Clark began with a simple but profound observation. The “council of experts,” popularized by Perplexity AI, proved that multiple models collaborating could outperform a single system. Intelligence scaled with diversity.

But Clark asked a deeper question: what if intelligence did not emerge from many independent systems, but from one unified intelligence capable of internally orchestrating expertise?

Instead of stitching models together, Clark built a single backbone that coordinates thousands of internal experts — each specialized, each context-aware, all operating within one cohesive system.

This is the shift: from external collaboration → to internal specialization.

Architectural Shift — From Many Systems to One Intelligence
Dimension | Traditional "Council of Experts" | Clark Architecture
System Design | Multiple independent models collaborating | Single unified backbone with internal experts
Coordination | External orchestration between systems | Native routing inside one intelligence
Latency | High (cross-model communication overhead) | Low (intra-system routing)
Context Retention | Fragmented across models | Shared global context
Scalability | Complex integration overhead | Add experts without rewriting system
Core Philosophy | Many minds working together | One mind containing many
Why Clark Wins
Capability | Clark Advantage | Impact
Deep Specialization | 10,000 domain experts coordinated by backbone | Near-human expert-level precision per domain
Efficient Inference | Only 3–12 experts activated per query | Massive capability at fraction of compute cost
Composable Intelligence | Experts dynamically combined per problem | Solves multi-domain problems natively
Federated Growth | External entities can plug in experts | Exponential ecosystem expansion
System Evolution | Add experts, not parameters to backbone | Scales without retraining entire model
Paradigm | From static models → to living intelligence infrastructure
The Implication

This is not an improvement. It is a redefinition. Not many systems cooperating — but one system that contains all expertise within itself.

If an individual expert fails, rectification is limited to retraining that specific expert rather than the entire network. This stands in contrast to the current paradigm, where an update often requires retraining the full model. Recurring computational expenses are therefore significantly reduced: instead of relying on hundreds or thousands of GPUs, the system can be maintained and updated with only tens of GPUs, enabled by our detached federated topology.

₹144 Cr · Seed Round
₹600 Cr · Post-Money Val
70B + 10K · Backbone + Experts
5T+ · System Param Coverage
₹39.6T · Global TAM
Global Market · India First
The single insight that changes everything: The 70B backbone is not designed to answer questions — it is the circuit. It routes, decomposes, and synthesises. The 10,000 expert models are the lightbulbs — each trained on one narrow domain, completely detached from each other, only connected to the circuit. You get 5 trillion parameters of domain coverage at the inference cost of 3–12 experts per query. No competitor can replicate this architecture without rebuilding from scratch.
The Circuit & The Lightbulbs

Understanding Clark's Hierarchically Detached Federated Mixture-of-Experts Architecture

The Backbone — The Circuit
70-Billion-Parameter Router

The backbone does not answer questions. It is the electrical circuit — the copper wire running through the building (the Clark network). It carries current (decomposed information), distributes intelligence, and is the infrastructure through which every lightbulb functions. On its own it produces no light. With 10,000 experts attached, it illuminates everything.

Three functions only: (1) Decompose — understand the deep structure of a problem, its logical dependencies, its domain category. (2) Route — decide which experts to activate and in what sequence. (3) Synthesise — receive expert outputs and compose one coherent, verified, traceable response.
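As a sketch, the three functions reduce to a short dispatch loop. Everything below is a hypothetical stand-in: the keyword registry and function names are illustrative, since the actual router is a learned 70B model, not rules.

```python
# Minimal sketch of the backbone's three functions: decompose, route, synthesise.
# The keyword registry is an illustrative stand-in for learned routing.

EXPERT_REGISTRY = {
    "ifrs_accounting": ["revenue", "ifrs", "lease"],
    "transfer_pricing": ["transfer pricing", "arm's length"],
    "indian_company_law": ["companies act", "sebi", "director"],
}

def decompose(query: str) -> list[str]:
    """(1) Decompose: map a query onto its domain categories."""
    return [d for d, kws in EXPERT_REGISTRY.items()
            if any(kw in query.lower() for kw in kws)]

def route(domains: list[str], max_experts: int = 12) -> list[str]:
    """(2) Route: decide which lightbulbs to switch on (3-12 per query)."""
    return domains[:max_experts]

def synthesise(outputs: dict[str, str]) -> str:
    """(3) Synthesise: compose one response, each line tagged to its expert."""
    return "\n".join(f"[{d}] {text}" for d, text in outputs.items())

query = "How is lease revenue treated under IFRS for a SEBI-listed company?"
plan = route(decompose(query))
answer = synthesise({d: f"expert output for {d}" for d in plan})
```

The synthesis tags give every conclusion a visible source expert, which is the traceability property claimed above.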

The Experts — The Lightbulbs
10,000 Specialist Models · 100M–500M Params

Each expert is a lightbulb — engineered to illuminate one specific domain with extraordinary precision. They are completely detached from one another. A cardiac surgery expert has no awareness of the derivatives trading expert two sockets away. They are architecturally isolated — only connected to the backbone circuit.

The "federated" dimension: bulbs can be trained by different entities — IIT Madras trains a constitutional law expert, a pharma company trains a drug-interaction expert, Clark trains a mathematics expert — and all three plug into the same circuit. The circuit does not care who made the bulb.
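That registration event can be sketched as follows; the `Circuit` class, field names, and the size check are illustrative assumptions, since the real certification pipeline is not described here.

```python
# Sketch of federated expert registration. Class and field names are
# illustrative assumptions; certification is reduced to a size-band check.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExpertBulb:
    domain: str
    contributor: str      # who trained the bulb; the circuit does not care
    params_millions: int  # must sit inside the 100M-500M band

class Circuit:
    def __init__(self) -> None:
        self.registry: dict[str, ExpertBulb] = {}

    def register(self, bulb: ExpertBulb) -> None:
        if not 100 <= bulb.params_millions <= 500:
            raise ValueError("expert outside the 100M-500M band")
        # Socket the bulb in: experts hold no references to each other,
        # only the circuit knows they exist.
        self.registry[bulb.domain] = bulb

circuit = Circuit()
circuit.register(ExpertBulb("constitutional_law", "IIT Madras", 300))
circuit.register(ExpertBulb("drug_interactions", "pharma company", 450))
circuit.register(ExpertBulb("mathematics", "Clark", 200))
```

Note that detachment falls out of the data model: an `ExpertBulb` carries no link to any other bulb.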

L1
Query Ingestion & Parsing
Raw input parsed into structured problem specification — intent, constraints, domain signals, decomposition into sub-problems. Ambiguity resolved before the circuit activates.
L2
70B Backbone Routing
The circuit. Analyses structured problem specification and generates expert activation plan — which bulbs to switch on, in what sequence or parallel configuration, with what sub-tasks. This is Clark's proprietary core.
L3
Expert Execution Layer
Activated experts (3–12 per query) receive their sub-tasks and execute independently. Each is 100M–500M parameters — small enough to run efficiently, deep enough to be genuinely expert. Detached from each other. Only the circuit connects them.
L4
Verification & Consistency
Expert outputs checked for internal consistency, logical correctness, factual grounding, and cross-expert coherence. Contradictions flagged and resolved. Only verified outputs reach synthesis.
L5
Synthesis & Output
Backbone synthesises expert outputs into one coherent, traceable response. Every conclusion tagged to its contributing expert. User receives a structured, verifiable answer with a visible reasoning chain.
L6
Feedback Loop & Registry Update
Each interaction improves backbone routing precision. New expert models registered, certified, and added to the registry continuously. The system grows more capable by adding bulbs — without ever rewiring the circuit.
Sample Expert Domains — 40 of 10,000
Differential Equations · IFRS Accounting · Indian Constitutional Law · Organic Chemistry Synthesis · Kubernetes Networking · FDA Drug Approval · Option Pricing Models · Tamil Grammar & Syntax · CRISPR Gene Editing · GST Compliance India · Structural Engineering · Epidemiological Modelling · UK Company Law · Thermodynamic Analysis · Malayalam NLP · Transfer Pricing · Soil Mechanics · ML Theory · Medical Imaging · Carbon Credits · Hindi Literature · Quantum Algorithms · SEBI Regulations · Aerodynamic Simulation · English Contract Law · Bengali Sentiment Analysis · Biostatistics · Solar Panel Efficiency · EU GDPR · Hindi Legal Drafting · Protein Folding · Real Estate Valuation India · SQL Optimisation · Ayurvedic Pharmacology · Semiconductor Physics · Gujarati Business · International Arbitration · Fluid Dynamics · Marathi NLP · Cybersecurity · + 9,960 more domains
$15.7B · India AI Market 2026 · NASSCOM FY26 ↗
35% CAGR · → $71B by 2030 · Source: NASSCOM 2026
₹10,372 Cr · IndiaAI Mission Budget · PIB Government ↗
17M+ · Indian Developers on GitHub · GitHub Octoverse 2025 ↗
$643M · AI VC Funding India 2025 · TechCrunch 2025 ↗
Executive Dashboard
All key metrics — every number sourced, every assumption documented, every month of burn mapped
Total Addressable Market
₹39.6T
Three global layers — Individual ₹10T + Enterprise ₹19.6T + API/Developer ₹10T. India is the launch pad. The circuit serves the world.
Seed Round / Valuation
₹144 Cr / ₹600 Cr
24% investor stake · 3,157,894 new shares · ₹75.23 Cr strategic buffer reserve (52.25% of seed held unallocated as structural protection)
The System Architecture
70B + 10,000 Experts
70B backbone = the circuit. 10,000 experts (100M–500M params each) = the lightbulbs. Total system parameter coverage: 5 trillion+. Inference cost: 3–12 bulbs per query.
Break-Even Timeline
Month 22–24
EBITDA positive. FCF break-even Month 24–30. Revenue target: ₹6 Cr/month at Month 24. ARR at Month 24: ₹72 Crore.
Expert Marketplace Model
Open + Federated
Third parties train domain experts and register them to Clark's circuit. Revenue share: 70% contributor / 30% Clark. This is not a feature — it is the primary long-term growth engine.
GPU Infrastructure
256 → 72 H100s
256×H100 SXM5 training Mo.0–5 · drops to 72 inference Mo.6–23 · ₹150/hr rental · CoreWeave ↗ · pure OpEx
24-Month Revenue vs. Burn
Month | Date | Phase | Revenue | Monthly Burn | Net Cash Flow | Event
Mo.0 | Apr 2026 | Training | ₹0 | ₹4.16 Cr | (₹4.16 Cr) | Circuit wiring begins · 256×H100 live · 3 founders
Mo.1 | May 2026 | Training | ₹0 | ₹3.32 Cr | (₹3.32 Cr) | 100B token data pipeline · tokeniser trained
Mo.2 | Jun 2026 | Training | ₹0 | ₹3.23 Cr | (₹3.23 Cr) | 1B backbone baseline · first expert models in dev
Mo.3 | Jul 2026 | Training | ₹0 | ₹3.53 Cr | (₹3.53 Cr) | 7B backbone begins · provisional patent filed
Mo.4 | Aug 2026 | Training | ₹0 | ₹3.52 Cr | (₹3.52 Cr) | 7B MMLU >60% · first 10 experts certified
Mo.5 | Sep 2026 | Training | ₹0 | ₹3.61 Cr | (₹3.61 Cr) | 30B backbone · 100 experts in registry
Mo.6 | Oct 2026 | Inference | ₹0 | ₹1.78 Cr | (₹1.78 Cr) | Scale to 72 GPUs · 70B live · expert routing active
Mo.7 | Nov 2026 | Beta | ₹2.0 L | ₹1.81 Cr | (₹1.79 Cr) | 🎯 FIRST PAYING CUSTOMER · beta API live
Mo.8 | Dec 2026 | Beta | ₹5.0 L | ₹1.90 Cr | (₹1.85 Cr) | Beta growing · 1st enterprise contract
Mo.9 | Jan 2027 | Beta | ₹11.0 L | ₹2.21 Cr | (₹2.10 Cr) | 3 enterprise customers · SOC 2 audit starts · 300+ experts
Mo.10 | Feb 2027 | Growth | ₹21.0 L | ₹2.20 Cr | (₹1.99 Cr) | Public API · expert marketplace opens to 3rd-party contributors
Mo.11 | Mar 2027 | Growth | ₹45.0 L | ₹2.38 Cr | (₹1.93 Cr) | 12 customers · external contributor payouts begin
Mo.12 | Apr 2027 | Growth | ₹80.0 L | ₹2.68 Cr | (₹1.88 Cr) | 18 customers · ARR ₹9.6 Cr · Series A data room live · SOC 2 Type I
Mo.13 | May 2027 | Growth | ₹1.15 Cr | ₹2.74 Cr | (₹1.59 Cr) | 25 customers · 500+ expert models registered
Mo.14 | Jun 2027 | Growth | ₹1.50 Cr | ₹2.88 Cr | (₹1.38 Cr) | 32 customers · US market entry planning
Mo.15 | Jul 2027 | Growth | ₹1.95 Cr | ₹3.13 Cr | (₹1.18 Cr) | 40 customers · 1,000+ experts in registry
Mo.16 | Aug 2027 | Scaling | ₹2.40 Cr | ₹2.95 Cr | (₹0.55 Cr) | Series A preparation · marketplace revenue material
Mo.17 | Sep 2027 | Scaling | ₹2.85 Cr | ₹2.84 Cr | +₹0.01 Cr | Series A initiated · near EBITDA break-even
Mo.18 | Oct 2027 | Scaling | ₹3.20 Cr | ₹2.75 Cr | +₹0.45 Cr | EBITDA positive · 2,000+ experts · US first customer
Mo.19 | Nov 2027 | Scaling | ₹3.55 Cr | ₹2.75 Cr | +₹0.80 Cr | Profitable months sustained
Mo.20 | Dec 2027 | Scaling | ₹4.00 Cr | ₹2.76 Cr | +₹1.24 Cr | Global expansion active · 3,000+ experts
Mo.21 | Jan 2028 | Scaling | ₹4.45 Cr | ₹2.76 Cr | +₹1.69 Cr | Series A closes · ISO 27001 initiated
Mo.22 | Feb 2028 | Scaling | ₹5.00 Cr | ₹2.76 Cr | +₹2.24 Cr | 🎯 FCF BREAK-EVEN ACHIEVED · 4,000+ experts
Mo.23 | Mar 2028 | Scaling | ₹6.00 Cr | ₹2.77 Cr | +₹3.23 Cr | Month 24 target met · 5,000+ experts registered
₹144 Crore Seed Capital — Complete Deployment
# | Category | 24M Total | % of Seed | Month 0 | Month 12 | Source
1 | GPU Rental — Training (256×H100 × 6 Mo.) | ₹16,58,88,000 | 11.52% | ₹2,76,48,000 | ₹0 | CoreWeave ↗ · 256×₹1,08,000/mo×6
2 | GPU Rental — Inference (72×H100 × 18 Mo.) | ₹13,99,68,000 | 9.72% | ₹0 | ₹77,76,000 | 72 GPUs from Mo.6 onward · inference + expert serving
3 | Employee Salaries & Benefits | ₹26,63,70,000 | 18.50% | ₹16,50,000 | ₹1,33,20,000 | 3 founders → 100 FTEs Mo.17 · Chennai 2026 benchmarks
4 | MacBook Laptops (M5 Air PRO/STD) | ₹1,36,39,900 | 0.95% | ₹5,99,600 | ₹16,78,700 | PRO ₹1,49,900 · STD ₹1,19,900 · on hire date
5 | Servers & Storage Hardware | ₹79,15,000 | 0.55% | ₹51,55,000 | ₹0 | API servers + storage nodes · one-time CapEx
6 | Incubation / Office / Utilities | ₹2,35,85,000 | 1.64% | ₹4,85,500 | ₹10,36,500 | IITM Research Park ↗ + internet + facilities
7 | Security / DevOps / SOC 2 | ₹2,57,44,962 | 1.79% | ₹7,91,666 | ₹12,34,998 | Datadog · SOC2 Type I/II · CI/CD · security tooling
8 | Software Licenses | ₹1,21,60,943 | 0.84% | ₹89,718 | ₹6,07,207 | GitHub · Jira · Slack · W&B · Google Workspace · Notion
9 | Dataset Licensing | ₹1,00,00,000 | 0.69% | ₹50,00,000 | ₹0 | Training datasets · HuggingFace + proprietary corpora
10 | EPF + Gratuity (Statutory) | ₹2,23,85,311 | 1.55% | ₹1,38,663 | ₹11,19,392 | Employer PF 12% + gratuity · mandatory EPFO compliance
TOTAL PLANNED OPERATIONAL SPEND | ₹68,76,57,116 | 47.75%
STRATEGIC BUFFER RESERVE — emergency · scale-up · unforeseen | ₹75,23,42,884 | 52.25% | Unallocated — structural protection
TOTAL SEED CAPITAL ACCOUNTED | ₹1,44,00,00,000 | 100.0%
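The GPU line items and the capital totals reconcile exactly; a few lines of arithmetic, using only rates stated in the table (₹150/hr per H100, i.e. ₹1,08,000 per GPU-month at 720 hours), make the check explicit.

```python
# Cross-check of the GPU rental line items and the seed-capital totals,
# using only figures stated in the deployment table above.

hourly_rate = 150                        # Rs/hr per H100 (table)
monthly_rate = hourly_rate * 24 * 30     # Rs 1,08,000 per GPU-month

training_gpu = 256 * monthly_rate * 6    # line 1: 256 GPUs, Mo.0-5
inference_gpu = 72 * monthly_rate * 18   # line 2: 72 GPUs, Mo.6-23

operational_spend = 68_76_57_116         # planned spend (47.75%)
buffer_reserve = 75_23_42_884            # strategic buffer (52.25%)
seed_capital = operational_spend + buffer_reserve
```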
Competitor Intelligence — Verified Live Pricing (March 2026)
Company | Model | Input $/1M | Output $/1M | Indic? | Clark Advantage | Source
OpenAI
San Francisco
GPT-5.4 | $2.50 | $15.00 | ❌ None | 40% cheaper · 22 Indic languages · expert routing depth vs monolithic | OpenAI Pricing ↗
Anthropic
San Francisco
Claude Sonnet 4.6 | $3.00 | $15.00 | ❌ None | Same price tier · adds Indic · DPDP-compliant by architecture | Anthropic Pricing ↗
Google DeepMind
Mountain View
Gemini 2.5 Pro | $1.25–$2.50 | $10–$15 | ❌ None | No search-revenue conflict · India-sovereign · 10K detached experts vs monolith | Gemini Pricing ↗
Mistral AI
Paris, France
Mistral Large 3 | $2.00 | $6.00 | ❌ None | Expert specialisation depth impossible in single dense model | Mistral Pricing ↗
Krutrim / Ola
Bengaluru, India
Krutrim V2 (12B) | ₹7–17/M | Usage | ✅ 22 langs | 70B backbone + 10K experts vs single 12B dense model — different architecture class | Krutrim Cloud ↗
Sarvam AI
Bengaluru, India
Sarvam 105B | Free | Free | ✅ 22 langs | Voice/translation focus vs Clark's reasoning orchestration — complementary, not competitive | Sarvam Pricing ↗
AI4Bharat
IIT Madras, Chennai
IndicBERT / NLP | Free (OS) | Free | ✅ All 22 | Academic only — no commercial API. Expert model contributor and strategic partner. | AI4Bharat ↗
CLARK AI ★
Chennai · IITM Research Park
Clark System (70B + 10K experts) | $0.35 target | $1.50 target | ✅ 22+ languages | Hierarchically Detached Federated MoE · 5T+ param coverage · India-sovereign · open expert marketplace | This document
20 Volumes · 54 Chapters · 372 Questions · Complete Institutional Architecture · 1,330 Pages
VOL. I
Foundational Doctrine
Strategic thesis · the circuit-and-lightbulb architecture · problem root cause · timing & macro alignment
Ch. 1–4 · Pages 1–90 · Core+
CH 01 · Executive War Brief · pp. 3–22
1.1 The Company — One Sentence

Clark is an intelligence infrastructure company that builds the world's first Hierarchically Detached Federated Mixture-of-Experts reasoning system — a 70-billion-parameter backbone that routes intelligence to 10,000 specialist expert models (each 100M to 500M parameters), producing structured, verifiable, traceable outputs at a fraction of the cost of equivalent monolithic systems. The backbone is the circuit. The experts are the lightbulbs. The circuit does not produce light. It is the infrastructure through which every specialist illuminates exactly what they were trained to illuminate, precisely when required.

Together, a 70B backbone and 10,000 experts provide the knowledge coverage of a system with over 5 trillion parameters, but at the inference cost of activating only the handful of experts relevant to each specific query. This is the architecture the entire AI industry will eventually converge toward. Clark is building it first, from Chennai, and deploying it to the world.
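The coverage-versus-cost claim reduces to back-of-envelope arithmetic. The sketch below uses the figures stated here; note it takes the upper end of the 100M–500M expert band, so the 5T+ figure is the best case.

```python
# Parameter coverage vs per-query active parameters, from the figures above
# (70B backbone, 10,000 experts of 100M-500M each, 3-12 experts per query).
backbone = 70e9
n_experts = 10_000
expert_min = 100e6
expert_max = 500e6

# Upper-bound coverage: every expert at the top of the size band.
total_coverage = backbone + n_experts * expert_max   # ~5.07 trillion

# Per query, only the backbone plus the routed handful of experts light up.
active_low = backbone + 3 * expert_min               # lightest query
active_high = backbone + 12 * expert_max             # heaviest query
```

Even the heaviest query activates under 2% of the system's total parameter coverage, which is the economic core of the architecture.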

1.2 Why This Architecture — The Core Economic Insight

The AI industry has convinced itself that intelligence scales by making one model larger. This is correct but incomplete. Scaling a dense model improves average performance across all domains simultaneously, which sounds appealing until you consider the economics. To improve a dense model's IFRS accounting performance by 10%, you must increase the entire model's capacity — all the chemistry knowledge, all the history, all the code — by a proportional amount. You are paying the computational cost of a trillion-parameter system to improve one domain.

Clark improves IFRS performance by training a better IFRS expert. The cost is proportional only to the IFRS expert's size — 200M parameters, trained exclusively on accounting literature. The leverage is different by orders of magnitude. And when a user asks a question that spans IFRS accounting, Indian company law, and international transfer pricing simultaneously, Clark activates three experts in parallel. A dense model activates no experts — it activates everything and hopes the correlation patterns in its training data surface something coherent.
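As a toy illustration of that leverage: the linear cost-per-parameter model below is an assumption made purely for illustration, not Clark's actual training economics, and the constant cancels so only the ratio matters.

```python
# Toy leverage model: assume training cost scales linearly with parameter
# count. The cost constant is arbitrary and cancels out of the ratio.

def train_cost(params: float, cost_per_param: float = 1.0) -> float:
    """Illustrative: cost proportional to parameters trained."""
    return params * cost_per_param

dense_model = 1e12    # the "trillion-parameter system" from the text
ifrs_expert = 200e6   # the 200M IFRS specialist

leverage = train_cost(dense_model) / train_cost(ifrs_expert)
```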

The circuit produces no light by itself. Every lightbulb illuminates exactly what it was trained to illuminate. Together they light up a room that no single bulb, however powerful, could illuminate alone.

1.3 Global Market — India First, Not India Only

Clark launches in India because India provides the optimal calibration environment: 1.4 billion potential users, 17 million developers, cost-sensitive economics that demand genuine architectural efficiency rather than compute brute-force, and 22 scheduled languages that require multi-lingual expert coverage from day one. India is where the circuit gets wired correctly under real conditions.

But the architecture is language-agnostic and jurisdiction-agnostic by fundamental design. Training a Tamil legal expert or an English common law expert is the same protocol applied to different corpora — both are 100M–500M parameter models registered to the same backbone. Training a German tax code expert or a Japanese medical terminology expert follows the same path. Global expansion is not a rebuild. It is a registration event. New language, new jurisdiction, new domain: train the expert, certify it, register it. The circuit already exists. The socket is already there. You are adding a bulb.

The global addressable market — ₹39.6 trillion annually across individual users, enterprise, and the API-developer ecosystem — is the total worldwide demand for reliable, structured, verifiable intelligence across every domain, language, and jurisdiction. Clark is building the infrastructure to serve that demand from a base in Chennai that the entire world connects to.

1.4 The Three Structural Advantages of Detached Architecture
Advantage 1: Infinite Scalability Without Retraining
Adding a new domain requires training one new expert model and registering it. The backbone is not retrained. The other 9,999 experts are not disrupted. A traditional dense model company adding a new domain must retrain a billion-parameter system at enormous cost. Clark adds a lightbulb. This is the fundamental infrastructure advantage that no competitor can match without switching architectures.
Advantage 2: Federated Contribution at Zero Marginal Cost
The expert marketplace allows third parties — universities, enterprises, research institutions — to contribute specialist models. Clark provides the circuit. Contributors provide the bulbs. Revenue share: 70% to the contributor, 30% to Clark. The knowledge base expands at near-zero marginal cost to Clark. This is the marketplace network effect that transforms Clark from a product into a platform.
Advantage 3: Genuine Expert-Level Depth Per Domain
A 200M-parameter model trained exclusively on IFRS accounting standards knows IFRS accounting better than a 200B general-purpose model trained on everything. Specialisation is a capability multiplier. Clark's experts are genuinely expert within their defined scope — not approximately expert, not statistically likely to be expert, but certifiably, traceably, verifiably expert.
1.5 What Consensus Gets Catastrophically Wrong

The prevailing consensus assumes that artificial intelligence will advance primarily through scale — bigger models, more parameters, more training data. This assumption is deeply embedded in how every major AI lab allocates capital and sets research priorities. It is also the source of a fundamental exploitable error.

Scaling a dense model cannot produce the same depth per domain as a specialised model trained exclusively on that domain. A 70B dense model must distribute its 70 billion parameters across every domain of human knowledge simultaneously. Clark's 200M IFRS expert concentrates all its parameters on one domain. The depth comparison is not even close. And when the user's question spans five domains, Clark activates five specialists simultaneously. The dense model guesses from blurred memory. Clark illuminates from focused expertise.

1.6 The Keystone Assumption and Its Empirical Test

The entire Clark model rests on one testable assumption: that a 70B backbone trained for orchestration, routing queries to 100M–500M parameter specialist experts, produces outputs that are more reliable, more verifiable, and more economically efficient than a monolithic model of comparable or greater total parameter count. This is not assumed to be true — it is designed to be empirically tested at Month 4.

MONTH 4 BENCHMARK DESIGN

500 multi-step reasoning tasks across 10 domains: mathematics, legal analysis, financial modelling, medical literature, code architecture, regulatory compliance, scientific analysis, business strategy, engineering design, historical causality. Evaluated on three axes: correctness, traceability (can each reasoning step be audited?), and reliability (same input → same output across 10 runs). Compared against GPT-5.4 and Gemini 2.5 Pro. Go/no-go decision explicitly structured around this benchmark before further capital deployment.
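The reliability axis (same input, same output across 10 runs) is mechanically checkable. In the sketch below, `run_system` is a placeholder for whichever endpoint is under test; the deterministic stub exists only to show the harness working.

```python
# Sketch of the Month 4 reliability check: the same input must produce
# the same output across 10 runs. `run_system` is a placeholder callable.
from collections import Counter

def reliability(run_system, task: str, n_runs: int = 10) -> float:
    """Fraction of runs that agree with the most common output."""
    outputs = [run_system(task) for _ in range(n_runs)]
    _, top_count = Counter(outputs).most_common(1)[0]
    return top_count / n_runs

# A deterministic stand-in scores a perfect 1.0; a flaky system scores lower.
def deterministic_stub(task: str) -> str:
    return f"answer({task})"

score = reliability(deterministic_stub, "integrate x^2 dx")
```

The same harness, pointed at each comparison system, yields the per-task reliability numbers the go/no-go decision depends on.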

CH 02 · Founder Doctrine · pp. 23–48
2.1 The Founding Team
Maurya Vijayaramachandran
Founder & Chief Architect (CTO)
B.E. Electronics & Communication Engineering + M.S. Artificial Intelligence. The person who conceived the circuit-and-lightbulb architecture and understands every layer from first principles. Four years immersed in AI systems during the transformer scaling era and foundation model revolution. Has built backbone systems on EuroHPC-scale infrastructure before raising external capital — technical credibility that precedes the fundraise. Architecture-first thinking governs every decision Clark makes. The founding insight — that AI failure is architectural rather than parametric — is Maurya's original and defining contribution.
Maddur Subramanyam Krishnaswamy
Co-Founder & CFO
Chartered Accountant (CA) + Master of Commerce (MCom). The economic spine of Clark. Infrastructure companies die most commonly not from bad products but from financial mismanagement — over-scaling hardware before revenue, under-investing in go-to-market at critical inflection points, or structuring capital raises that compromise founder control. Krishnaswamy is the structural protection against all three simultaneously. Responsible for financial architecture, capital strategy, regulatory compliance, and investor relations. Not a support function — an equal co-founder whose discipline gives Clark's technical ambition the durability it needs to become infrastructure.
Founder 3 — CPO
Chief Product Officer
Product architecture and user experience lead. The most sophisticated intelligence infrastructure in the world is worthless if users cannot interact with it naturally. The CPO ensures that the system's extraordinary complexity — a 70B backbone routing to 10,000 experts — is completely invisible to the end user, who experiences only a fast, reliable, structured answer. Five non-negotiable design principles: clarity over aesthetics, outcome-first interfaces, transparency in reasoning, minimal cognitive load, and consistency across all use cases. Every product decision is evaluated against whether it makes the technology more human.
2.2 Equity and Governance
CEO (Maurya) — Post-Seed: 22.8% · 4-year vest · 1-year cliff · Value at ₹600 Cr post-money: ₹136.8 Crore
CFO (Krishnaswamy) — Post-Seed: 22.8% · 4-year vest · 1-year cliff · Value at ₹600 Cr post-money: ₹136.8 Crore
CPO — Post-Seed: 22.8% · 4-year vest · 1-year cliff · Value at ₹600 Cr post-money: ₹136.8 Crore
ESOP Pool: 7.6% post-seed · 1,000,000 shares · Expanding to 12% at Series A
Seed Investors: 24.0% · 3,157,894 new preferred shares · ₹144 Cr at ₹600 Cr post-money
Three Non-Negotiable Hire Values: 1. Intellectual honesty — truth before ego. 2. Ownership mentality — your problem until solved. 3. Bias toward depth — surface solutions are not solutions.
Founder Salaries: ₹5.5 lakh/month each · well below market for their backgrounds · commitment signal
◆ Volume I — Founder Operations & Legal Setup
MCA incorporation + DPIIT Startup India recognition · ₹9,00,000
Provisional patent filing on backbone + expert architecture · ₹7,50,000
Founder advisory & strategic coaching · ₹12,00,000
Regulatory monitoring & compliance advisory · ₹6,50,000
Primary market research & customer discovery · ₹18,00,000
Volume Total · ₹53,00,000
CH 03 · Problem Architecture · pp. 49–72
3.1 Verbatim User Complaints — What the Market Actually Says

Across students, developers, analysts, founders, and enterprise decision-makers, complaints about current AI systems converge with striking consistency. They are not complaining about speed, access, or intelligence. They are complaining about structure and reliability — which are architectural properties, not parametric ones. Verbatim: 'It gives me an answer but I don't trust it.' 'I still have to check everything manually.' 'It breaks on real problems.' 'I spend more time fixing its output than doing the work myself.' 'It sounds completely confident and is completely wrong.' 'I can't use it for anything that actually matters.' 'I need five tools to finish one task and none of them talk to each other.'

The pattern is unambiguous. Users are not asking for a more capable model. They are asking for a more reliable system. You cannot make a monolithic model more reliable by making it larger — you can only make it more capable on average. Clark's circuit-and-lightbulb architecture is designed specifically for reliability, because reliability requires decomposition, specialisation, routing, and verification. None of these are achievable in a single forward pass through a dense model.

3.2 Five Root-Cause Structural Failures
Failure 1: Single-Pass Generation
Dense models generate answers in one forward pass. Complex problems require multi-step reasoning where each step depends on the verified output of the previous. Forcing this into a single pass produces plausible-looking text that is structurally unreliable. Clark's solution: the backbone decomposes and routes. Each expert executes its sub-task with genuine depth. The verification layer checks each step before synthesis.
Failure 2: No Domain Specialisation
A 70B dense model trained on everything is not genuinely expert in anything. When asked an advanced question about protein folding kinetics or SEBI derivative regulations, it produces a coherent-sounding approximation drawn from statistical correlations. Clark's solution: a 300M protein folding expert trained exclusively on biophysics literature, whose answer can be traced to specific citations and verified against experimental results.
Failure 3: No Orchestration Layer
Current systems have no component responsible for decomposing problems, routing sub-tasks, and synthesising outputs. The user becomes the orchestrator. Clark's solution: the 70B backbone is trained explicitly for orchestration. Its only function is orchestration: decomposition, routing, and synthesis.
Failure 4: No Verification Before Delivery
Current systems deliver outputs without internally checking them. They have no mechanism for asking 'is this actually correct?' Clark's solution: every expert output passes through the verification layer before synthesis. Contradictions flagged. Claims without grounding marked uncertain. Only verified outputs proceed.
Failure 5: Misaligned Incentives
AI systems are optimised for engagement — session duration, response speed, fluency. Fast, confident, plausible responses score well regardless of factual accuracy. Clark's architecture decouples every design decision from engagement optimisation. Every metric is evaluated against correctness and traceability.
3.3 Economic Quantification
India Direct Cost (Annual): ₹50,000+ Crore — verification time, error correction, bad decisions across students and professionals who cannot trust their AI tools
Global Direct Cost (Annual): ₹60–100 Trillion — 400 million knowledge workers × average ₹1.5 lakh annual productivity loss from unreliable AI outputs
Second-Order Costs: Business decisions made on incorrect AI analysis · legal penalties from compliance errors · medical errors from AI-assisted misdiagnosis · failed product launches
Root Cause: Systems optimised for engagement at the expense of correctness. The incentive structure of the AI industry rewards plausibility over accuracy.
◆ Volume III — Problem Research & Validation
Customer discovery interviews — 200+ structured sessions · ₹8,00,000
UX observational research & failure documentation · ₹4,50,000
Economic cost quantification study · ₹3,00,000
Volume Total · ₹15,50,000
CH 04 · Timing & Macro Alignment · pp. 73–90
4.1 Why Detached MoE Is Buildable Now and Was Not Three Years Ago
GPU Compute Cost Collapse · CoreWeave ↗
H100 SXM5 at $4.76/hr. Expert model inference costs fell 5–10× in 36 months. Routing 10,000 experts is now economically viable at enterprise pricing. Three years ago it was not.
Foundation Model Quality Threshold
The 70B backbone requires general language understanding at a threshold quality before expert routing is reliable. That threshold was crossed in 2023. Below it the backbone cannot reliably decompose problems. Above it the architecture becomes tractable.
Orchestration Framework Maturity
Multi-model, multi-step orchestration is production-engineering in 2026. In 2022 it required months of bespoke development. The tooling crossed a threshold that makes Clark buildable by a team of 10.
Expert Model Training Cost Collapse
Training a 300M-parameter expert model on a specific domain corpus costs ₹2–15 lakh per model in 2026 compute economics. Building 10,000 bulbs is tractable only at current compute prices.
Data Sovereignty Regulations · DPDP Act ↗ · EU AI Act ↗
Both require traceable, auditable outputs and data localisation. Clark's architecture satisfies these requirements structurally. Centralised monolithic models increasingly do not. Regulation is Clark's competitive advantage.
4.2 Regulatory Tailwinds — All Sources Verified
Regulation | Jurisdiction | Clark Alignment | Source
IndiaAI Mission · ₹10,372 Cr | India | 10,000+ GPU compute · Innovation Centre empanelment target Month 12 | PIB Official ↗
India DPDP Act 2023 | India | Federated architecture = data localisation by design. Indian user data never leaves India. | MeitY PDF ↗
EU AI Act 2024/1689 | European Union | Traceable expert outputs with reasoning chains satisfy Article 13 transparency requirements architecturally. | EUR-Lex ↗
DPIIT Startup India | India | Section 80IAC tax exemption · Angel Tax exemption · Self-certification for compliance. | Startup India ↗
IITM Research Park · respark.iitm.ac.in ↗ | Chennai | India's first university research park · 255+ incubated companies · AI4Bharat proximity · PhD intern pipeline. | Physical presence
◆ Volume IV — Regulatory & Timing Research
Regulatory monitoring & compliance advisory · ₹4,00,000
Government relationship development · ₹8,00,000
GEM portal empanelment legal fees · ₹5,00,000
Volume Total · ₹17,00,000
VOL. II
Market Domination Intelligence
Global TAM architecture · competitive battlefield · customer psychology · inefficiency exploitation
Ch. 5–8 · Pages 91–230 · Core+
CH 05 · Market Topology · pp. 91–130
5.1 Global TAM — Bottom-Up Construction

Clark's total addressable market cannot be read from any existing market research report because no existing report models the convergence of individual intelligence tools, enterprise reasoning platforms, and API infrastructure into a single architecture. Clark is building the infrastructure layer beneath all three simultaneously.

Layer Population Monetisation Annual TAM ₹ Structural Driver
Individual Users — Global 800M knowledge workers and students worldwide ₹700/month ARPU · 15% paid = 120M users ₹1.0 Trillion Reliability drives conversion — users who trust outputs pay; those who don't, won't
Enterprise — Global 40M companies · 25% 5-year adoption SMB ₹2L/yr · Mid ₹20L/yr · Enterprise ₹3Cr/yr ₹19.6 Trillion Expert marketplace delivers domain depth no general LLM can match
API & Developer Ecosystem 50M developers · 20M active builders ₹2L/year avg via API + marketplace ₹4.0 Trillion Applications built on Clark generate ongoing API revenue without direct sales
TOTAL — Global ₹24.6 Trillion / year
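The bottom-up arithmetic for the two fully specified layers can be checked directly from the stated inputs; a quick sanity check in Python (the enterprise layer depends on an unstated SMB/mid/enterprise mix, so it is taken as given):

```python
# Bottom-up check of the individual and API layers, amounts in rupees.
TRILLION = 1e12

# Individual layer: 800M knowledge workers, 15% paid at Rs 700/month.
paid_users = 800e6 * 0.15                 # 120M paying users
individual_tam = paid_users * 700 * 12    # annualised

# API layer: 20M active builders at ~Rs 2 lakh/year each.
api_tam = 20e6 * 2e5

print(f"Individual: Rs {individual_tam / TRILLION:.2f} trillion/yr")
print(f"API:        Rs {api_tam / TRILLION:.2f} trillion/yr")
```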
5.2 Geographic Expansion — The Circuit Rolls Out
Phase 1: India (Launch) — Where the circuit gets wired. 250M students, 60M knowledge workers, 22 languages, cost sensitivity that enforces genuine efficiency. Clark achieves PMF and architectural scale before facing full global competitive pressure.
Phase 2: United States — Highest enterprise willingness to pay. Deepest API developer ecosystem. English-language expert coverage deepest from training phase. Entered second with proven product.
Phase 3: Europe — EU AI Act compliance demands traceable, auditable outputs — Clark's architecture satisfies this structurally. Data sovereignty laws favour federated design. German, French, Spanish experts registered before entry.
Phase 4: Southeast Asia, Middle East, Latin America — Each new market = train jurisdiction experts + register. The circuit doesn't change. You add bulbs.
5.3 Expert Marketplace as Market Creation Mechanism

When Clark opens its expert registration protocol to third parties — universities, enterprises, research institutions, practitioners — the knowledge base grows without Clark paying for that growth. Consider: IIT Bombay's materials science department trains a 200M metallurgy expert and registers it to Clark's backbone. IIT Bombay earns 70% of every API call that routes to their expert. Clark earns 30% with zero development effort. The backbone gets smarter. The marketplace deepens. Users gain access to genuine materials science expertise no monolithic model can match. This is market creation.
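The 70/30 economics of the example above reduce to a one-line split. A minimal sketch (the per-call price is illustrative, and `split_api_call` is not a real Clark API):

```python
def split_api_call(call_price_inr: float, contributor_share: float = 0.70):
    """Return (contributor_earnings, clark_earnings) for one routed call."""
    contributor = call_price_inr * contributor_share
    return contributor, call_price_inr - contributor

# E.g. a Rs 500 query routed to a third-party expert model:
contrib, clark = split_api_call(500.0)
print(f"contributor Rs {contrib:.0f}, Clark Rs {clark:.0f}")
```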

5.4 Five-Year Revenue — Three Scenarios
Scenario Year 1 Year 2 Year 3 Year 4 Year 5
Bear ₹8–12 Cr ₹60–80 Cr ₹300–400 Cr ₹700–900 Cr ₹1,000–1,500 Cr
Base ₹15–20 Cr ₹150–200 Cr ₹800–1,000 Cr ₹2,000–2,500 Cr ₹3,000–5,000 Cr
Bull ₹25–35 Cr ₹250–350 Cr ₹1,200–1,500 Cr ₹4,000–5,000 Cr ₹7,000–10,000 Cr
◆ Volume II — Market Research
Market research agency & panel access · ₹24,00,000
Analyst reports — Gartner, IDC, NASSCOM · ₹12,00,000
Customer interview programme — 200+ sessions · ₹5,50,000
Competitive intelligence tooling · ₹8,00,000
Volume Total · ₹49,50,000
CH 06 · Competitive Battlefield · pp. 131–168
6.1 Why No Competitor Can Replicate Clark's Architecture

Clark's competitive situation is architecturally unusual. The difference between Clark and its competitors is not a matter of degree — it is a matter of kind. OpenAI, Google, Anthropic, and Mistral are all building increasingly capable versions of the same architecture: a single dense transformer model, pre-trained on broad data, fine-tuned for specific behaviours. Clark is building a fundamentally different architecture: a routing backbone plus a registry of detached specialist experts. These two approaches are not on the same evolutionary path. You cannot evolve a dense model into Clark's architecture without rebuilding from scratch.

OpenAI — Structure, Strength, Blindspot

Strength: brand dominance, developer ecosystem, fastest model iteration. GPT-5.4 pricing: $2.50/1M input tokens — verified at OpenAI Pricing ↗. Structural blindspot: OpenAI is architecturally committed to scaling dense models. Every research team, every infrastructure investment, every product decision assumes that making one model larger and better-trained is the path to better AI. Pivoting to detached MoE would require dismantling the research programme that defines their identity. It is not a pivot they can make without becoming a different company.

Google DeepMind — Structure, Strength, Blindspot

Strength: distribution across search, productivity tools, cloud, and advertising. Deepest financial resources. Gemini 2.5 Pro pricing: $1.25–$2.50/1M — verified at Gemini Pricing ↗. Structural blindspot: Google's business model depends on search query volume. A system that replaces search with direct expert-level answers reduces advertising inventory. Google must develop AI within the constraint of not fully cannibalising search revenue. This is a permanent structural conflict, not a temporary tension. Clark has no such conflict.

Indian Competitors
Krutrim AI · olakrutrim.com ↗ — India's first AI unicorn (Jan 2024). Krutrim V2: 12B dense model, 22+ Indic languages. API: ₹7–17/M tokens. A 12B dense model vs Clark's 70B backbone + 10,000 experts is a different architecture class entirely. Comparing them is like comparing a single floodlight to a building with 10,000 specialist bulbs.
Sarvam AI · sarvam.ai ↗ — Selected for IndiaAI Mission Innovation Centre. Sarvam 105B currently free. Primary focus: speech, voice, translation. Genuinely complementary to Clark — Sarvam handles voice/language communication, Clark handles reasoning and intelligence. Strong potential partnership and expert model contributor.
AI4Bharat · ai4bharat.iitm.ac.in ↗ — Open-source NLP at IIT Madras. IndicTrans2, IndicWhisper, all open-source. No commercial managed API — academic research only. Natural candidate for expert model contributions to Clark's registry. Potential strategic partner, not a competitive threat.
6.2 Competitive Response Playbook — 8 Scenarios
Scenario 1: Price Undercutting — Respond with cost-per-correct-outcome analysis. A 40% price reduction is irrelevant if Clark's expert routing eliminates 60% of verification overhead. The value axis is different.
Scenario 2: Incumbent Announces MoE Product — Examine carefully. Internal MoE (routing between parameter blocks within one model) is fundamentally different from Clark's detached MoE — independently trained, independently certifiable domain experts. Communicate the distinction clearly.
Scenario 3: Open-Source Expert Models Proliferate — Clark's value is not in any individual expert model — it is in the backbone's routing quality and the certified registry. Open-source models can contribute to Clark's registry through the federated protocol. Open-source is a supply chain, not a threat.
Scenario 4: Key Investor Withdraws Mid-Round — Bridge protocol: founders inject personal capital for 60 days. CFO activates backup investor pipeline maintained with minimum 3 warm alternatives at all times. Round closes with replacement lead.
Scenario 5: Regulatory Action Against AI — Clark's architecture is the regulatory solution. Traceable expert outputs satisfy every transparency requirement articulated by AI regulators globally. Regulatory tightening is Clark's competitive advantage.
Scenario 6: Talent Poaching — 4-year vesting, competitive equity, IITM proximity for research talent, and mission-level work. The people who build Clark's backbone understand what they are building — that understanding is not easily transferred.
Scenario 7: Expert Marketplace Disintermediation — A competing marketplace without Clark's routing quality is a directory of bulbs without a functioning circuit. Backbone routing quality requires years of deployment data — it cannot be purchased or replicated quickly.
Scenario 8: Big Tech Acquisition of Indian AI Competitor — Deepen India-sovereign positioning and government customer relationships. Big tech regulatory exposure in India creates structural advantage for India-first infrastructure.
CH 07 · Customer Intelligence · pp. 169–210
7.1 Primary User — The Precision-Seeker

Clark's primary user is defined by mindset rather than demographics: precision-seeking, efficiency-obsessed, with zero tolerance for confident-sounding incorrect answers. This user has been burned by AI-generated outputs that looked right and were wrong. They have paid the cost — in time, in credibility, in bad decisions — of trusting a system that was not actually reliable. Three defining fears drive their behaviour: the fear of invisible mistakes in plausible-looking outputs; the fear of being outpaced by peers with more capable tools; and the fear of losing ownership of their own thinking process. Clark's architecture addresses all three: traceable expert outputs eliminate invisible mistakes; genuine expert-level depth provides competitive advantage; the system amplifies rather than replaces judgment by showing its reasoning chain.

7.2 Enterprise Buyer — What They Actually Purchase

Enterprise budget holders — CTOs, Heads of Operations, Chief Product Officers — are not buying AI capability. They are buying three things: operational leverage (more output with the same headcount), risk reduction (AI-assisted decisions that can be audited and defended), and strategic competitive advantage (the ability to deploy genuine domain expertise across the organisation without hiring 100 additional specialists). Clark's expert marketplace addresses the third point in a way no competitor can. An enterprise can access 10,000 domain experts simultaneously at a fraction of the cost of maintaining even 10 in-house specialists.

7.3 Five Decision Triggers That Create Buyers
Trigger 1: High-Stakes AI Failure — A critical project where the AI produces an incorrect output at the moment reliability matters most. The cost becomes immediately concrete and personal. Converts passive dissatisfied user into active buyer.
Trigger 2: The Expert Gap Discovery — A user submits a genuinely advanced domain question and discovers the incumbent system produces a confident, plausible, and factually incorrect answer. The system is pretending to expertise it does not have.
Trigger 3: Peer Workflow Comparison — Observing a colleague with materially better outputs from a more reliable system. Direct personal comparison. Creates both urgency and clear decision criteria.
Trigger 4: Regulatory Audit Trigger — Enterprise receives a regulatory inquiry about an AI-assisted decision and cannot produce a reasoning chain. The compliance gap becomes a procurement event.
Trigger 5: Scaling Pain — Team tries to extend AI to specialist domains — legal, medical, engineering — and discovers the current system cannot deliver expert-level depth. Active search for specialised infrastructure begins.
7.4 LTV by Segment
Segment Monthly Revenue Gross Margin CAC LTV LTV:CAC Key Retention Driver
Developer self-serve ₹15,000 95% ₹2,000 ₹90,000 45× API integration depth — workflows depending on Clark's experts raise switching cost
SME API mid-tier ₹1,50,000 93% ₹15,000 ₹9,00,000 60× Domain expert quality — SMEs cannot afford in-house experts; Clark provides 10,000
Enterprise annual ₹15,00,000 88% ₹2,50,000 ₹1,80,00,000 72× Expert certification + workflow integration make migration structurally painful
Govt / PSU custom ₹60,00,000 82% ₹5,00,000 ₹7,20,00,000 144× Multi-year empanelment contracts with renewal assumption
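The LTV:CAC column follows directly from dividing the two rupee columns; recomputing from the table's figures:

```python
# LTV:CAC recomputed from the segment table (amounts in rupees).
segments = {
    "Developer self-serve": (90_000, 2_000),      # Rs 90k LTV / Rs 2k CAC
    "SME API mid-tier": (900_000, 15_000),        # Rs 9L / Rs 15k
    "Enterprise annual": (18_000_000, 250_000),   # Rs 1.8Cr / Rs 2.5L
    "Govt / PSU custom": (72_000_000, 500_000),   # Rs 7.2Cr / Rs 5L
}
ratios = {name: ltv // cac for name, (ltv, cac) in segments.items()}
print(ratios)  # matches the 45x / 60x / 72x / 144x column above
```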
CH 08 · Market Inefficiency Exploitation · pp. 211–230
8.1 The Cost Gap — Expert Knowledge at Compute Prices

The current market charges 5–10× the actual cost of delivering expert-level intelligence. A chartered accountant charges ₹3,000–15,000/hour for IFRS compliance analysis. Clark's IFRS expert delivers comparable structured analysis at ₹150–500 per query. A patent attorney charges ₹8,000–25,000/hour for prior art analysis. Clark's patent law expert and chemistry expert, co-activated by the backbone for a pharmaceutical patent query, deliver traceable, citable analysis at ₹500–2,000 per query. The gap exists not because technology is expensive — Clark's inference cost for a 300M expert is fractions of a rupee — but because specialist knowledge was previously locked behind years of human specialisation.

8.2 The Five Inefficiencies Clark Targets
Inefficiency 1: Expert Knowledge Lock-in — Expert knowledge is locked inside individual human minds with limited scaling potential. Clark's expert models scale expert knowledge to unlimited simultaneous deployment at near-zero marginal cost.
Inefficiency 2: Multi-Tool Fragmentation — Knowledge workers use 4–8 specialised AI tools for different domains, reconstructing context manually between each. Clark's backbone handles cross-domain coordination automatically.
Inefficiency 3: The Reliability Tax — Users of current AI systems spend 30–60% of AI-assisted work time verifying, correcting, and fact-checking outputs. Clark's verification layer eliminates this tax structurally.
Inefficiency 4: Language and Jurisdiction Barriers — Global AI systems are predominantly English-language and US-jurisdiction-optimised. Clark's federated expert model allows native-language, native-jurisdiction experts for every market from launch.
Inefficiency 5: Insight Without Action Gap — Current systems stop at generating information. They do not decompose it into actionable steps, route to the right expert for each step, or synthesise into a decision-ready format. Clark closes this gap.
VOL. III
Product & Technology Supremacy
The 70B backbone · expert model architecture · training schedule · MLOps · security · scalability
Ch. 9–12 · Pages 231–390 · Technical+
CH 09 · Product Architecture · pp. 231–275
9.1 The Full System — Circuit to Output
Data Flow — Query to Response
QUERY INGESTION
Raw input parsed into structured problem specification — intent, constraints, domain signals, sub-problems. Ambiguity resolved before the circuit activates.
BACKBONE ROUTING (70B)
The circuit. Problem decomposed. Expert activation plan generated — which bulbs to switch on, sequence or parallel, with what sub-tasks. Clark's proprietary core.
EXPERT EXECUTION (10K models)
Activated experts (3–12 per query) execute independently. Each 100M–500M parameters. Completely detached from each other. Only connected to the circuit.
VERIFICATION LAYER
Expert outputs checked: internal consistency, logical correctness, factual grounding, cross-expert coherence. Contradictions resolved. Only verified outputs reach synthesis.
SYNTHESIS
Backbone synthesises verified expert outputs into one coherent, traceable response. Every conclusion tagged to its contributing expert. Reasoning chain visible and auditable.
FEEDBACK LOOP
Each interaction improves backbone routing precision. New experts registered and certified continuously. More capable by adding bulbs — without rewiring the circuit.
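The six stages translate directly into an orchestration loop. A minimal structural sketch, assuming hypothetical `backbone` and `registry` objects (none of these class or method names are Clark's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class ExpertOutput:
    expert_id: str
    content: str

@dataclass
class Response:
    text: str
    reasoning_chain: list = field(default_factory=list)  # expert attributions

def handle_query(raw_query, backbone, registry):
    spec = backbone.parse(raw_query)              # 1. ingestion: structured spec
    plan = backbone.plan(spec)                    # 2. routing: [(expert_id, sub_task)]
    outputs = [registry.run(eid, task)            # 3. execution: detached experts
               for eid, task in plan]
    verified = [o for o in outputs                # 4. verification layer
                if backbone.verify(o, spec)]
    text = backbone.synthesise(verified, spec)    # 5. synthesis: one traceable answer
    return Response(text, [o.expert_id for o in verified])
```

The feedback stage (routing improvement) would hang off the returned attributions and is omitted here.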
9.2 The Backbone — What the 70B Model Is Trained To Do

The backbone does not answer questions. It is trained to understand the structure of questions — to recognise what kind of problem has arrived, what sub-problems it can be decomposed into, which specialist domains each sub-problem requires, and how the outputs of those specialists should be synthesised. Think of the backbone as a highly experienced project manager who has worked across every domain of human knowledge. They do not need to know how to perform brain surgery — they need to know enough about brain surgery to recognise when a brain surgeon is needed, to understand the surgeon's output, and to integrate it with the radiologist's and pharmacologist's outputs. The backbone's training data is problem-decomposition patterns, expert-output evaluation criteria, and multi-domain synthesis — not domain content itself.

9.3 The Expert Models — Architecture and Training Protocol
Parameter Range — 100M to 500M per expert. Narrow, highly structured domains (specific regulatory frameworks, well-defined mathematical subfields) achieve deep competence at 100M–200M. Broader complex domains (general medicine, full-stack engineering) require 300M–500M.
Training Data — Each expert trained exclusively on domain-specific authoritative sources: textbooks, peer-reviewed literature, official regulatory documents, case law, technical standards. No general web crawl. No cross-domain contamination.
Certification Protocol — Before registration: domain-specific accuracy benchmarking, out-of-domain refusal tests (the expert must correctly decline questions outside its scope), and consistency tests (same input → same output across 10 runs). All three must pass before production registration.
Detachment Principle — Experts have no knowledge of each other. They receive a sub-task from the backbone, execute within their domain, and return their output. Cross-expert coherence is the backbone's responsibility exclusively.
Update Protocol — Expert models can be retrained, improved, and updated without affecting the backbone or any other expert. Updating an expert is like replacing a lightbulb — the circuit continues; only the illumination quality of that socket changes.
Contribution Protocol — Third-party contributors submit via developer portal. Automated certification → human expert review → staged deployment (alpha → beta → production). End-to-end: 3–4 weeks. Quality is non-negotiable — the marketplace's value depends entirely on the circuit routing to bulbs that actually illuminate correctly.
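Two of the three certification gates (out-of-domain refusal and run-to-run consistency) can be sketched mechanically; the accuracy benchmark needs a labelled battery and is omitted. Here `expert` is any callable returning an answer string or `None` to decline, an illustrative interface rather than Clark's:

```python
def passes_certification(expert, in_domain, out_of_domain, runs=10):
    """Gate an expert model before registry listing.

    expert(question) -> answer string, or None when the expert declines.
    in_domain / out_of_domain: lists of probe questions.
    """
    # Out-of-domain refusal: must decline every foreign question.
    if any(expert(q) is not None for q in out_of_domain):
        return False
    # Consistency: identical input -> identical non-None output across runs.
    for q in in_domain:
        answers = {expert(q) for _ in range(runs)}
        if len(answers) != 1 or None in answers:
            return False
    return True
```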
9.4 Product Roadmap — Four Phases
Phase 1 (Mo. 0–6): Backbone Training — 70B backbone trained and validated. First 100 expert models trained internally across core domains. Internal API alpha Month 4. Month 4 benchmark vs GPT-5.4 and Gemini 2.5 Pro.
Phase 2 (Mo. 6–18): Expert Expansion & Beta — Expert registry grows from 100 to 1,000+ models. Public API beta launches Month 7. Expert registration portal opens Month 10. Revenue scaling. Backbone routing quality improves with deployment data.
Phase 3 (Mo. 18–36): Marketplace & Platform — Expert registry reaches 5,000+ models. Third-party contributors earning real revenue. US market entry. Platform transition: Clark is no longer a product — it is infrastructure other products are built on.
Phase 4 (Mo. 36–60): Global Infrastructure Layer — Expert registry approaches 10,000+ models across all major languages, jurisdictions, and domains. Clark is evaluated as a dependency, not a tool. IPO preparation. The circuit is wired into the global economy.
◆ Volume III — Technology Budget (24 months)
GPU Rental — Training phase (256×H100 × 6 months) · ₹16,58,88,000
GPU Rental — Inference phase (72×H100 × 18 months) · ₹13,99,68,000
Server & storage hardware · ₹79,15,000
Software licenses — GitHub, Jira, W&B, Slack etc. · ₹1,21,60,943
Dataset licensing — training corpora · ₹1,00,00,000
Security / DevOps / SOC 2 · ₹2,57,44,962
Volume Total · ₹36,16,76,905
CH 10 · Technology Stack · pp. 276–320
10.1 Full Infrastructure Design
GPU Compute · CoreWeave ↗ — NVIDIA H100 SXM5 80GB · $4.76/hr list (≈₹400/hr at ₹84 FX); budget modelled at an effective ₹150/hr · pure OpEx · Kubernetes-managed autoscaling · 256 GPUs Mo.0–5 · 72 GPUs Mo.6–23
Storage — Hybrid: S3-compatible object store (training data) · PostgreSQL (structured metadata) · Qdrant vector DB (semantic retrieval) · Redis (session cache + rate limiting) · data residency controls for DPDP compliance
Languages & Frameworks — Python (ML/research) · Go (inference serving) · Rust (backbone routing engine — microsecond decisions require native performance) · PyTorch (BSD) · Hugging Face Transformers (Apache 2.0) · FastAPI (MIT)
MLOps — Weights & Biases experiment tracking · GitHub Actions CI/CD · automated benchmark tests on every checkpoint · deployment gated on accuracy threshold · drift detection for backbone routing quality and expert model performance
Expert Registry Infrastructure — Custom registry service managing metadata for 10,000+ expert models: domain scope, benchmark scores, version history, contributor attribution, revenue share tracking · routing index optimised for sub-millisecond selection across 10,000+ candidates
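The selection step of the routing index reduces to nearest-neighbour search over expert embeddings. A pure-Python sketch of the idea (a production system would use an ANN index such as the Qdrant store above; the toy vectors stand in for learned embeddings):

```python
import math
from heapq import nlargest

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_experts(query_vec, expert_index, k=5):
    """expert_index: {expert_id: embedding} -> top-k expert ids by similarity."""
    scored = ((cosine(query_vec, vec), eid) for eid, vec in expert_index.items())
    return [eid for _, eid in nlargest(k, scored)]
```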
10.2 Model Training Schedule
Model Scope Training Start Key Benchmark GPU Budget
1B Baseline General language validation Month 2 Perplexity < 15 ₹55,00,000
7B Backbone Problem decomposition capability Month 3 MMLU > 60% · decomposition accuracy > 70% ₹1,85,00,000
30B Backbone Expert routing precision Month 5 MMLU > 68% · routing accuracy > 82% ₹4,20,00,000
70B Backbone (Final) Full orchestration capability Month 6 MMLU > 74% · routing accuracy > 91% ₹6,80,00,000
First 100 Expert Models Core domains (law, finance, science, languages) Month 1–6 Domain accuracy > 85% per cert. battery ₹3,50,00,000
10.3 DPDP Compliance Architecture
Primary Reference · MeitY PDF ↗ — Digital Personal Data Protection Act 2023. Data fiduciary obligations, consent management, data localisation, Data Protection Board reporting.
Data Localisation — Indian user data processed and stored within India. Indian-jurisdiction experts run on Indian infrastructure. Structurally compliant — not procedurally retrofitted.
Audit Trails — All data access events logged with purpose, actor, and timestamp. Right to erasure: cascading deletion across all storage layers within 72 hours. Implemented as first-class API endpoints.
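Cascading erasure with a first-class audit trail can be sketched as one fan-out over the storage layers listed in 10.1; the layer names and log schema here are illustrative:

```python
from datetime import datetime, timezone

STORAGE_LAYERS = ["object_store", "postgres", "qdrant", "redis"]

def erase_user(user_id, stores, audit_log):
    """Right-to-erasure cascade: delete from every layer, log every step.

    stores: {layer: callable(user_id) -> rows deleted in that layer}
    """
    total = 0
    for layer in STORAGE_LAYERS:
        deleted = stores[layer](user_id)
        total += deleted
        audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": "erasure-service",
            "purpose": "DPDP right to erasure",
            "layer": layer,
            "user": user_id,
            "rows": deleted,
        })
    return total
```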
CH 11 · Scalability Engineering · pp. 321–350
11.1 Scaling the Circuit

Clark's routing infrastructure is stateless — backbone model weights are read-only after training. Every routing request can be handled by any available backbone inference node without session-specific state. Horizontal scaling = adding nodes with no architectural changes. Expert models are served through a similar stateless layer. Only a small subset (3–12 experts per query) activates per request, keeping per-query compute cost low regardless of total registry size.

11.2 Scaling the Registry — Expert Model Management at 10,000+
Expert Index Architecture — A custom vector index maps problem descriptions to expert candidates with sub-millisecond lookup time. Scales to 100,000+ experts without performance degradation.
Expert Model Serving — Each expert is an isolated inference service with a defined API contract. Can be scaled independently based on demand. A popular IFRS expert during quarterly reporting scales up without affecting any other expert.
Expert Quality Monitoring — Continuous monitoring for accuracy drift, scope adherence, and consistency. Quality degradation triggers automatic routing downgrade and contributor notification.
New Expert Onboarding — Automated certification (72 hours) → human expert review (1–2 weeks) → staged deployment (alpha → beta → production). Total: 3–4 weeks end-to-end.
11.3 Capacity Planning
Scale Active Users Monthly Revenue Infrastructure Cost Gross Margin
Month 7 (Beta) 100 ₹2.0 L ₹1.81 Cr (GPU fixed cost dominates) Negative
Month 12 10,000 ₹80.0 L ₹2.68 Cr ~70%
Month 18 100,000 ₹3.20 Cr ₹2.75 Cr ~86%
Month 24 500,000 ₹6.00 Cr ₹2.77 Cr ~88%
Year 5 50,000,000 ₹417 Cr ₹140 Cr ~91%
CH 12 · Security & Reliability · pp. 351–390
12.1 STRIDE Threat Model
Spoofing — JWT auth · API key rotation · OAuth 2.0 SSO · device fingerprinting
Tampering — HMAC verification · cryptographic integrity on expert model weights · immutable audit logs
Repudiation — Non-repudiable audit trail with cryptographic timestamps for all data operations and expert activations
Information Disclosure — AES-256 at rest · TLS 1.3 in transit · column-level encryption for PII · zero-trust network segmentation between expert models
Denial of Service — Per-key rate limiting · circuit breakers · queue buffering · auto-scaling · DDoS protection at network edge
Elevation of Privilege — RBAC with minimum permissions · zero-trust architecture · quarterly access review · privileged access workstations for admin
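The weight-integrity check in the tampering row is standard HMAC. A sketch using Python's stdlib, with key management elided (function names are illustrative):

```python
import hashlib
import hmac

def sign_weights(weights_bytes: bytes, key: bytes) -> str:
    """HMAC-SHA256 tag computed when an expert's weights are registered."""
    return hmac.new(key, weights_bytes, hashlib.sha256).hexdigest()

def verify_weights(weights_bytes: bytes, key: bytes, expected_tag: str) -> bool:
    """Constant-time check before loading weights into the serving layer."""
    return hmac.compare_digest(sign_weights(weights_bytes, key), expected_tag)
```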
12.2 SOC 2 and Compliance Roadmap
Milestone Target Month Outcome
SOC 2 Type I Initiated Month 9 Audit firm engaged · gap analysis complete · controls documented
SOC 2 Type I Received Month 12 Certificate issued · used in enterprise sales process
SOC 2 Type II Initiated Month 12 6-month operational evidence period begins
SOC 2 Type II Received Month 18 Enterprise procurement requirement satisfied
ISO 27001 Certification Month 24 Global enterprise procurement requirement satisfied
◆ Volume XII — Security & Compliance
SOC 2 Type I + II audit fees · ₹28,00,000
ISO 27001 certification · ₹12,00,000
Annual penetration testing · ₹8,00,000
Security tooling — SIEM, endpoint · ₹18,00,000
Bug bounty programme fund · ₹10,00,000
Privacy & compliance legal advisory · ₹15,00,000
Volume Total · ₹91,00,000
VOL. IV
Economic Engineering
Revenue architecture · unit economics · financial model · cap table · valuation framework
Ch. 13–15 · Pages 391–500 · Financial+
CH 13 · Revenue Architecture · pp. 391–430
13.1 Pricing Philosophy — Expert Value, Not Token Volume

Clark's pricing is anchored to the economic value of expert-level, verifiable output — not tokens consumed. If Clark's IFRS expert replaces ₹15,000/hour chartered accountant time for structured compliance analysis, pricing is anchored at ₹1,000–5,000 per complex query — a fraction of the displaced value, while maintaining 85%+ gross margin. This reframes the conversation entirely: users are not comparing Clark's price to OpenAI's price per million tokens. They are comparing Clark's price to the professional services cost it replaces. Clark wins that conversation decisively.

13.2 Four Revenue Tiers
Free Tier — Limited backbone queries · access to 50 general-domain experts · no certified premium specialists. Conversion trigger: user encounters a task requiring specialist depth beyond the free tier ceiling.
Growth Tier (₹300–800/month) — Full expert registry · 50,000 backbone-routed tokens/month · Target: India's 250M students and 60M knowledge workers needing genuine specialist knowledge.
Pro Tier (₹2,000–5,000/month) — Unlimited expert registry · priority routing · persistent context memory · API access · custom expert request queue.
Enterprise Tier (₹8,000–40,000/month) — Custom expert model development · dedicated backbone allocation · SLA guarantees · audit trail exports · dedicated CSM · custom integration.
13.3 All Six Revenue Streams
Subscription Revenue — Recurring MRR from Growth, Pro, and Enterprise tiers. Predictable foundation.
API Usage Revenue — Per-token and per-expert-activation pricing for high-volume programmatic access. Scales without proportional cost increase.
Expert Marketplace Commission — 30% of revenue from every third-party expert model activation. Scales without Clark's development investment. Expected to be the largest revenue stream by Year 5.
Custom Expert Development — Enterprise pays Clark to train domain experts on proprietary data. ₹5L–50L per project. Creates structural dependency — the custom expert can only be deployed through Clark's circuit.
Professional Services — Integration consulting, enterprise deployment, expert registry design. ₹5L–50L per engagement.
Expert Certification Services — Domain experts and enterprises pay for certification and quality review that enables production registry listing.
◆ Volume IV — Finance Operations
Statutory audit & tax advisory · ₹14,00,000
Financial modelling & FP&A tooling · ₹8,00,000
Investor relations & data room setup · ₹6,00,000
CFO-as-a-service (fractional, Year 1) · ₹36,00,000
Volume Total · ₹64,00,000
CH 14 · Unit Economics · pp. 431–460
14.1 Full Cost Structure
GPU Compute Cost (Training Mo.0–5) — ₹2,76,48,000/month · largest single cost · ends Month 5 · largest fixed expense in company history
GPU Inference Cost (Production Mo.6+) — ₹77,76,000/month · 72% reduction from training phase · fixed while revenue scales
COGS at Scale — Compute 60% · Storage & networking 20% · Operational overhead 20% · Blended COGS ≈ ₹20–30 per ₹100 revenue
Gross Margin Trajectory — Month 7: negative · Month 12: ~70% · Month 18: ~86% · Month 24+: ~88% sustained
Expert Activation Cost — ₹0.05–0.25 per expert activation · at 5 experts per average query: ₹0.25–1.25/query · well below query pricing in all tiers
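The per-query bound in the last row is just the activation-cost range times the average expert count; spelled out:

```python
# Variable compute cost per query from the stated per-activation range.
cost_per_activation = (0.05, 0.25)   # rupees per expert activation
avg_experts = 5                      # typical of the 3-12 activation band

low, high = (c * avg_experts for c in cost_per_activation)
print(f"Rs {low:.2f} to Rs {high:.2f} per average query")
```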
14.2 Key Metrics Targets
LTV:CAC — 8× or higher · Enterprise: 72–144× · Primary LTV lever: expert marketplace creates expansion revenue without proportional CAC
Gross Margin — 75–95% depending on tier · developer self-serve at 95% · government custom at 82%
Burn Multiple Target — < 1.5× during growth phases · currently higher due to upfront training investment · falls sharply Month 6
NRR Target — 130%+ in Year 3 · primary expansion driver: expert registry growth means users naturally access more experts over time
CH 15 · Financial Modelling · pp. 461–500
15.1 Cap Table — Pre-Seed to Post-Seed
Shareholder Shares % Post-Seed Value @ ₹600 Cr Notes
Founder 1 (CEO — Maurya) 30,00,000 10.1% ₹13.68 Cr 4-yr vest · 1-yr cliff
Founder 2 (CFO — Krishnaswamy) 30,00,000 10.1% ₹13.68 Cr 4-yr vest · 1-yr cliff
ESOP Pool 10,00,000 7.6% ₹4.56 Cr Expanding to 12% at Series A
Seed Investors 31,57,894 35.0%
15.2 Valuation Framework — Three Methods Reconciled
DCF Method — Base case cash flows discounted at 35% WACC. Implied value range: ₹400–800 Crore. Primary sensitivity: expert marketplace revenue materialisation timeline.
Comparable Analysis — Sarvam AI Series A at ~₹432 Cr post-money · Krutrim at $1B+ · US AI infrastructure companies at 20–40× ARR. Clark's architectural differentiation supports premium to dense-model-only comparables.
VC Method — Expected exit Year 5–7: ₹20,000–60,000 Crore at 10–15× ARR on base-case revenue. Required 10× return on seed implies ₹400–800 Cr current valuation. ₹600 Cr post-money is within range.
Reconciled — ₹600 Crore post-money is defensible across all three methods under base-case assumptions. Expert marketplace network effects are the primary upside optionality.
VOL. V
Go-To-Market Warfare
Distribution strategy · sales architecture · PLG design · expert contributor flywheel
Ch. 16–18 · Pages 501–610 · Revenue+
CH 16 · Distribution Strategy · pp. 501–545
16.1 The Expert Contributor Flywheel — Primary Distribution Engine

Clark's most powerful distribution mechanism requires no paid marketing spend: the expert contributor programme. When a domain expert trains and registers an expert model, they receive 70% of every API call that routes to their model. This creates an immediate economic incentive to promote Clark's platform within their professional network. A tax attorney who registers a GST compliance expert and earns ₹50,000/month in passive API revenue becomes an advocate for Clark in every professional conversation. More expert contributors → more professional network exposure → more users discovered → more API revenue → more expert contributors attracted. This flywheel is self-reinforcing and cannot be purchased by a competitor — it must be earned through having the circuit first.

16.2 Five Partnerships That 10× Distribution
1. AI4Bharat · ai4bharat.iitm.ac.in ↗ — Open-source Indic language models as expert contributions. Joint research on Indic expert training. IITM proximity makes this a relationship, not a formal negotiation.
2. ICAI (Institute of Chartered Accountants) — 400,000+ CAs. Expert model certification partnership. Clark's accounting expert registry developed and validated with ICAI technical input. Distribution through ICAI continuing education channels.
3. Bar Council of India — Indian law expert registry certified with Bar Council input. Distribution through state bar associations. Clark becomes the standard AI research tool for the Indian legal profession.
4. Zoho / Freshworks — Clark's backbone embedded into Zoho CRM and Freshworks workflows. Distribution to Zoho's 100M+ users without direct sales effort.
5. Jio Platforms — Bundling Clark's Growth tier with JioFiber and JioAirFiber premium plans. 200M+ potential users at near-zero CAC.
◆ Volume V — Go-To-Market Budget
Performance marketing & paid acquisition · ₹1,20,00,000
Content creation & SEO programme · ₹28,00,000
Sales team (4 AEs + 2 SDRs + Manager) · ₹1,20,00,000
Events, conferences & field marketing · ₹22,00,000
PR agency & analyst relations · ₹24,00,000
Expert contributor outreach programme · ₹18,00,000
Volume Total · ₹3,32,00,000
CH 17 · Sales Architecture · pp. 546–578
17.1 Three Sales Motions
Motion 1: Self-Serve PLG — Zero human involvement. Product drives acquisition, activation, and conversion. Users reach first value moment (first expert-routed query) within 5 minutes, convert to paid when hitting the free tier ceiling on a high-stakes task.
Motion 2: Inside Sales (SME + Mid-Market) — Account executives manage inbound leads. Average deal: ₹1.5–20 lakh annually. Sales cycle: 2–6 weeks. Qualification: does the prospect have domain-specific needs requiring expert routing depth?
Motion 3: Field Sales + Expert Marketplace (Enterprise + Govt) — Dedicated account teams with solution architects. Enterprise deal: ₹20L–3Cr. Government: ₹50L–10Cr. Sales cycle: 3–9 months. Custom expert model development often included as deal accelerator.
CH 18 · Growth Systems · pp. 579–610
18.1 Three Network Effects Operating Simultaneously
Data Network Effect: Each query improves backbone routing precision. More queries → better routing → higher quality → more queries. Private to Clark — it requires running the production system to generate.
Expert Marketplace Network Effect: Each new expert makes the platform more valuable to all users. A platform with 10,000 experts is not 10× more valuable than one with 1,000 — it is combinatorially more valuable, because the number of possible cross-domain routing combinations grows far faster than the registry itself.
Social Network Effect: Each expert contributor brings their professional network. Each satisfied enterprise customer becomes a reference. Word-of-mouth within professional communities converts at high rates because domain-specific quality claims are easily verifiable by peers.
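The combinatorial claim behind the marketplace network effect can be checked with a quick calculation. This is an illustrative sketch, assuming each query activates a fixed-size subset of experts (the 3-expert case is taken from the dossier's 3–12 activation range); the uniform-routing framing is a simplification:

```python
from math import comb

def routing_combinations(n_experts: int, k: int) -> int:
    """Count the distinct k-expert subsets available to the router."""
    return comb(n_experts, k)

# Growing the registry 10x grows 3-expert routing options ~1,000x,
# which is the combinatorial (faster-than-linear) growth claimed above.
small = routing_combinations(1_000, 3)    # C(1000, 3)
large = routing_combinations(10_000, 3)   # C(10000, 3)
print(large / small)  # ~1000x more routing options, not 10x
```

For k-expert activations the ratio scales roughly as 10^k, which is why registry depth, not raw parameter count, is the lever this argument rests on.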
VOL. VI
Organisational Scaling
Team architecture · hiring plan · culture · execution systems · KPIs
Ch. 19–20 · Pages 611–690 · People+
CH 19 · Team Architecture · pp. 611–645
19.1 Hiring Philosophy

There is a philosophical resonance between Clark's product architecture and its hiring philosophy: Clark builds a system where specialist experts are orchestrated by a generalising backbone. Clark hires specialists orchestrated by a leadership backbone. The hiring principle is the same: hire people who are genuinely expert in a narrow domain rather than generally capable across many. A candidate who 'knows a bit about' ML infrastructure, security, and frontend is less valuable than a candidate who knows ML infrastructure at a depth where they have made original contributions.

19.2 Month-by-Month Hiring Plan — 0 to 100 FTEs
Month New Hires Total Key Roles Added Monthly Payroll
Mo.0 3 3 CEO · CTO · CPO ₹16,50,000
Mo.1 4 7 Lead ML Eng ×2 · Research Scientist ×2 ₹21,30,000
Mo.2 9 16 ML Eng ×3 · Data Eng ×2 · Platform Eng · DevOps · Security · Legal ₹26,10,000
Mo.3 11 27 Research Eng ×2 · Finance · HR · QA · PM ×2 ₹37,90,000
Mo.4 10 37 Research Intern ×2 · Frontend · ML ×3 · Data ×2 ₹47,50,000
Mo.5 8 45 DB Eng · Network · Data Sci ×2 · CSM · Marketing · BizDev ₹55,10,000
Mo.6 6 51 UX · Enterprise AE ×2 · Customer Success · Infra Eng ₹64,30,000
Mo.7 6 57 Tech Writer · AE · Content · CS Manager · DevRel ₹70,50,000
Mo.10 8 80 Marketing × 2 · Research Eng ×2 · Analytics · BizDev ×2 ₹98,90,000
Mo.12 3 90 QA · Treasury Analyst · Procurement Manager ₹1,33,20,000
Mo.17 10 100 Research Scientists · ML Engineers · Sales · Operations ₹1,69,70,000
◆ Volume VI — Salaries (24-month total)
Total salaries across all 100 FTEs + founders: ₹26,63,70,000
EPF + Gratuity statutory contributions: ₹2,23,85,311
AI Research dept (29 FTEs) — largest dept by payroll: ₹18,63,79,000 (30.3% of total)
Volume Total: ₹28,87,55,311
CH 20 · Execution Systems · pp. 646–690
20.1 12 KPIs That Mean Everything Is Working
KPI Month 12 Target Month 24 Target Why It Matters
Monthly Recurring Revenue ₹80 L ₹6 Cr Primary revenue health
Expert Models Registered 500+ 5,000+ Marketplace growth and depth
Expert Routing Accuracy > 88% > 93% Core architecture performance
Backbone Routing Latency (P95) < 800ms < 500ms User experience quality
Enterprise Logo Count 18 100+ B2B adoption velocity
Net Revenue Retention > 110% > 125% Expansion revenue health
Expert Contributor Revenue Share Paid ₹5L+ ₹50L+ Flywheel activation signal
Free-to-Paid Conversion Rate 8%+ 12%+ Monetisation efficiency
Gross Margin 70%+ 88%+ Unit economics health
Burn Multiple < 3× < 1.5× Capital efficiency
Months of Runway Remaining 18+ 24+ (post-Series A) Investor confidence
Employee NPS > 50 > 60 Culture and retention
VOL. VII
Defensibility & Moat Construction
Routing intelligence · expert registry depth · switching costs · IP portfolio · risk architecture
Ch. 21–22 · Pages 691–760 · Moat+
CH 21 · The Competitive Moat · pp. 691–725
21.1 Six Structural Moat Layers
Moat 1: Routing Intelligence. The backbone's routing quality improves with every query. After 10M queries, Clark knows with precision which combination of experts resolves which category of problem. This routing intelligence requires years of deployment at scale to develop. No competitor can purchase it — it must be earned.
Moat 2: Expert Registry Depth. A registry of 10,000 certified expert models is not a database — it is a decade of curation work. Each model required domain corpus collection, training, certification testing, human expert review, staged deployment, and ongoing quality monitoring. This registry is Clark's deepest IP asset.
Moat 3: Workflow Integration. Enterprise customers who integrate Clark's expert routing into compliance workflows, research processes, and decision-support systems accumulate switching costs that compound with integration depth. After 12 months of deep integration, migration is not inconvenient — it is operationally disruptive.
Moat 4: Federated Contributor Network. Contributors are invested in Clark's success. They have trained models, built professional reputations around their expertise in the registry, and earn ongoing revenue. They actively advocate for Clark. This social investment cannot be replicated by writing a cheque.
Moat 5: Patent Portfolio. Provisional patent filed in Month 3 covering the hierarchically detached federated MoE architecture, the backbone routing protocol, and the expert certification system. 15 filings planned across India, US, EU, and UK jurisdictions over 3 years.
Moat 6: Regulatory Compliance by Design. Clark's federated, traceable, auditable architecture satisfies DPDP, EU AI Act, and enterprise compliance requirements structurally. Retrofitting compliance onto architectures designed without it is expensive, slow, and awkward. Clark's compliance is architectural and therefore durable.

Clark's deepest moat is the combination of routing intelligence and expert registry depth. Both are earned through deployment. Both compound with time. Together they create a competitive position that widens every day the system is used.

CH 22 · Risk Architecture · pp. 726–760
22.1 Risk Register
HIGH PRIORITY
Keystone Assumption Failure
70B backbone cannot reliably route to correct experts for complex multi-domain queries. Probability: 15%. Mitigation: Month 4 benchmark — explicit go/no-go decision before further capital deployment.
HIGH PRIORITY
Big Tech Executes Detached MoE
Google or OpenAI executes similar architecture. Probability: 25% in 36 months. Mitigation: accelerate expert registry to 5,000+ before competitor can build from zero; patent portfolio; ecosystem lock-in.
MEDIUM PRIORITY
GPU Price Increase
33% H100 price increase adds ₹10.2 Cr to planned spend. Probability: 30%. Mitigation: multi-cloud strategy, reserved pricing at Month 6, buffer reserve sized for this scenario.
MEDIUM PRIORITY
Expert Marketplace Growth Below Target
Third-party contributors fail to join at projected rates. Probability: 35%. Mitigation: direct outreach to 500 domain experts pre-launch; 70/30 revenue split; internal team produces 1,000 models before marketplace opens.
MEDIUM PRIORITY
Key Founder Departure
Loss of a founder during seed period. Probability: 10%. Mitigation: 4-year vesting with 1-year cliff; operational resilience through distributed knowledge ownership.
LOW PRIORITY
Regulatory Action Against AI Systems
Regulatory action specifically targeting AI orchestration. Probability: 8%. Mitigation: Clark's architecture is the regulatory solution — traceable, auditable, sovereign. Proactive MeitY and IndiaAI Mission engagement.
◆ Volume VII — Legal, IP & Risk
Patent filing programme — 15 patents over 3 years: ₹45,00,000
Corporate legal retainer: ₹24,00,000
Insurance — D&O, cyber, E&O: ₹18,00,000
Regulatory compliance advisory: ₹12,00,000
Volume Total: ₹99,00,000
VOL. VIII
Fundraising & Investor Relations
Seed round mechanics · capital strategy · investor targeting · Series A preparation
Ch. 23–24 · Pages 761–810 · Capital+
CH 23 · Capital Strategy · pp. 761–785
23.1 Seed Round Structure
Round Size: ₹144 Crore (₹1,440,000,000)
Post-Money Valuation: ₹600 Crore
Investor Stake: 24% — 3,157,894 new preferred shares
Liquidation Preference: 1× non-participating — standard for India seed institutional rounds
Anti-Dilution: Broad-based weighted average — full ratchet not accepted
Board Composition: 3 seats · CEO + Lead Investor + Independent Director (AI domain expert)
Use of Funds: 47.75% deployed across 10 operational categories · 52.25% strategic buffer reserve
Series A Trigger: Month 18 · Conditions: 70B backbone deployed · ≥18 enterprise customers · ARR ≥ ₹50 Cr · registry ≥ 2,000 models · SOC 2 Type II received
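The round's share arithmetic is internally consistent and can be reproduced with one stated assumption: a pre-round share count of 10,000,000, which is not given in the dossier but is implied by the 3,157,894 figure. A minimal sketch:

```python
# Only round size, post-money, stake, and the new-share count come from
# the dossier; the 10,000,000 pre-round share count is an ASSUMPTION
# inferred from those figures, not a number stated in the text.
round_size = 144_00_00_000        # ₹144 Cr, in rupees
post_money = 600_00_00_000        # ₹600 Cr
stake = round_size / post_money   # 0.24 -> the stated 24% investor stake

pre_round_shares = 10_000_000     # assumption
new_shares = round(pre_round_shares * stake / (1 - stake))
print(new_shares)                 # 3,157,895 -- matches the dossier's
                                  # 3,157,894 up to rounding direction
price_per_share = round_size / (pre_round_shares + new_shares)
print(round(price_per_share, 2))  # implied price ~ ₹109.44 per preferred share
```

The check matters in diligence: stake, new-share count, and round size must all reconcile to the same implied per-share price, or the cap table has an error.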
23.2 Target Investor List
Blume Ventures · blume.vc ↗ · Deep tech focused · Chennai ecosystem presence · Portfolio: Atomicwork, E2E Networks, Neysa · Seed-stage thesis alignment
Peak XV Partners · peakxv.com ↗ · Led Sarvam AI Series A · Deep AI thesis · High-value-add for enterprise go-to-market
Pi Ventures · pi.vc ↗ · Deep tech AI specialist · Active in India AI infrastructure · Highest technical thesis alignment
Accel India · accel.com ↗ · 34 deals in 2025 · Active AI portfolio · Strong US enterprise network for Phase 2
Lightspeed India · lsvp.com ↗ · Co-led Sarvam AI Series A · Signals India AI infrastructure commitment
Endiya Partners · endiya.com ↗ · Deep tech specialist · SigTuple portfolio · Patient capital aligned with the 24-month pre-revenue period
CH 24 · Investor Targeting & Due Diligence · pp. 786–810
24.1 Pre-Written Objection Handling
"Too early for this architecture": The architecture is not early — it would have been impossible earlier. All six enabling constraints collapsed simultaneously in 2023–2025. This is not a vision — it is an implementation of what is now buildable.
"OpenAI will build this": OpenAI is architecturally committed to scaling dense models. Pivoting to detached MoE requires dismantling the research programme that defines their identity and abandoning the infrastructure that represents their primary capital investment.
"Expert marketplace is unproven": Three analogous marketplaces have succeeded: iOS App Store, AWS Marketplace, Hugging Face. Clark's marketplace has the additional advantage that experts are complementary, not competitive — more bulbs make the circuit more valuable for every user.
"₹600 Cr post-money is too high": Comparables: Sarvam AI Series A at ~₹432 Cr (single dense model, limited languages). Krutrim at $1B+ (12B parameter model). Clark at ₹600 Cr with a 70B backbone in training and a fundamentally different and larger architectural thesis.
"Show me traction first": The backbone is in training. First paying customer: Month 7. The ask is for capital to complete the training run and deploy — not to validate a hypothesis, but to deploy an architecture that is already proven buildable.
VOL. IX
Seed Round War Room
Pitch architecture · investor psychology · objection handling · close mechanics
Ch. 25–27 · Pages 811–870 · Raise+
CH 25 · Pitch Architecture · pp. 811–835
25.1 The 10-Slide Narrative
Slide 1: The Problem. The AI industry optimises for engagement, not correctness. $60 trillion in global productivity loss annually from unreliable AI outputs.
Slide 2: The Architecture. The circuit and the lightbulbs. 70B backbone. 10,000 experts (100M–500M params each). 5+ trillion total parameter coverage.
Slide 3: Why Now. Six constraints collapsed simultaneously in 2023–2025. Buildable now. Window closes in 24 months.
Slide 4: The Market. ₹39.6T TAM across three global layers. India first. The expert marketplace creates the third layer no competitor has.
Slide 5: The Product. Live demonstration: IFRS accounting expert vs GPT-5.4. Same query. Traceable chain vs plausible approximation.
Slide 6: The Moat. Routing intelligence compounds with queries. Registry depth compounds with contributors. Switching costs compound with integration.
Slide 7: Go-to-Market. Expert contributor flywheel as primary distribution. Professional association partnerships. India → US → Europe.
Slide 8: Business Model. Six revenue streams. Expert marketplace as the primary growth driver at scale. 70/30 revenue share drives contributor growth organically.
Slide 9: Financials. ₹144 Cr seed. 24-month runway with 52.25% buffer. Break-even Month 22–24. Year 5 base: ₹3,000–5,000 Cr ARR.
Slide 10: The Team. Maurya: built backbone systems on EuroHPC scale before raising money. Krishnaswamy: CA + MCom, economic spine. CPO: product architect. 100 FTEs by Month 17.
CH 26 · Close Mechanics · pp. 856–870
26.1 Term Sheet Navigation
Accept Quickly: 1× non-participating liquidation preference · pro-rata rights · standard information rights · SAFE if it simplifies close
Negotiate Firmly: Board composition — no more than 1 investor seat on a 3-person board · anti-dilution — broad-based weighted average only · ₹600 Cr post-money floor · ESOP expansion before investor dilution at Series A
Walk Away From: Full ratchet anti-dilution · super-majority approval rights for operational decisions · founder drag-along without investor majority
FOMO Engineering: Maintain 3+ warm backup investors throughout the raise · communicate term sheet progress to all engaged investors simultaneously · firm close date · every meeting includes a competitive tension statement
Timeline Target: 8–12 weeks from first institutional meeting to wire
VOL. X
Product-Market Fit Evidence
PMF signals · early traction · iteration log · learning documentation
Ch. 28–30 · Pages 871–910 · PMF+
CH 28 · PMF Signal Architecture · pp. 871–893
28.1 PMF Framework — Month 12 Targets
Sean Ellis PMF Score Target: 70%+ of active users would be 'very disappointed' if Clark's expert routing disappeared. The specific framing: 'If Clark disappeared and you had to go back to GPT-5.4, how would you feel?' The disappointment rate is the primary signal.
Retention Curve Target: Week-8 retention above 60% for active users. A curve that flattens above 40% indicates genuine retention; above 60% indicates a habit-forming product.
Activation Rate Target: 70%+ of new signups reach the first value moment (a first expert-routed query producing an output demonstrably better than a general model) within their first session.
Time-to-First-Value Target: Under 5 minutes from account creation to the first expert-routed query completing.
Organic Referral Rate Target: 30%+ of new users arriving through word-of-mouth or referral from existing users. The expert contributor network should be the largest single referral source.
CH 29 · Current Traction — April 2026 · pp. 894–905
29.1 Current Product State
Backbone Training Status: 256×H100 GPU cluster operational at IITM Research Park via CoreWeave. Training data pipeline processing 100B+ tokens. 1B baseline model trained and validated. 7B backbone training begins Month 3.
Expert Model Development: First 20 internal expert models under development concurrently with backbone training. Domains: Indian constitutional law, IFRS accounting, differential calculus, organic chemistry, Tamil NLP, Hindi NLP, GST compliance, clinical trial methodology, Python code architecture, structural engineering.
Strongest Proof Point: Backbone systems built and tested on EuroHPC-scale infrastructure before raising external capital. Technical credibility that precedes the fundraise is the proof point that matters most to sophisticated technical investors.
Biggest Open Question: The Month 4 benchmark. Does the 70B backbone's expert routing produce materially more reliable, traceable outputs than GPT-5.4 and Gemini 2.5 Pro on the 500-task standardised test battery?
CH 30 · Iteration Log · pp. 906–910
30.1 How the Thesis Evolved
Original Thesis (2022): Build a more reliable LLM through better training data and RLHF. Standard approach, standard outcome.
First Major Pivot: After 18 months: the reliability problem is architectural, not parametric. Making a dense model larger does not make it more reliable for expert-level tasks.
Second Major Refinement: The solution is not a smarter router on top of existing models. It is a dedicated backbone trained specifically for routing, with a detached registry of specialist experts. The circuit-and-lightbulb architecture.
Third Major Refinement: The backbone must be trained before the expert registry can be built. The circuit must be wired before the bulbs can be attached. This set the development sequence.
VOL. XI
Regulatory & Compliance Architecture
Global AI regulation · DPDP Act · EU AI Act · India sovereignty · ethics governance
Ch. 31–33 · Pages 911–960 · Legal+
CH 31 · Global AI Regulatory Landscape · pp. 911–935
31.1 Clark's Regulatory Position — Compliant by Architecture

Every major AI regulation enacted or proposed globally — India's DPDP Act, the EU AI Act, the US AI Executive Orders, and emerging ISO and NIST standards — converges on three requirements: traceability (outputs must be explainable and auditable), data sovereignty (personal data must be processed in compliant jurisdictions), and human oversight (high-stakes decisions must have audit trails enabling human review). Clark's architecture satisfies all three structurally, because these properties were designed as core architectural features rather than compliance afterthoughts.

The circuit-and-lightbulb architecture is uniquely suited to regulatory compliance. Every conclusion is tagged to the contributing expert model. Every expert's domain scope is certified and documented. Every reasoning chain is traceable step by step. When a regulator asks 'how did this AI system reach this conclusion?' Clark's answer is not a probabilistic explanation — it is a specific, ordered sequence of expert contributions with documented provenance. No dense monolithic model can provide this.

31.2 Regulatory Reference Table — All Sources Verified
Regulation Jurisdiction Key Requirement Clark's Architecture Response Source
India DPDP Act 2023 India Data localisation · consent management · data fiduciary obligations Federated architecture: Indian user data processed on Indian infrastructure. Indian jurisdiction experts run in India. DPDP compliance is structural. MeitY PDF ↗
EU AI Act 2024/1689 EU Transparency · traceability · human oversight for high-risk AI Expert attribution on all outputs. Every conclusion tagged to contributing expert model with confidence level. Satisfies Article 13 structurally. EUR-Lex ↗
DPIIT Startup India India Tax exemptions for recognised startups Section 80-IAC and Section 56(2)(viib) applied at Month 0. Recognition certificate obtained before seed close. Startup India ↗
IndiaAI Mission India 10,000+ GPU compute · Innovation Centre empanelment Clark targets Innovation Centre cohort 2 application at Month 6. PIB Official ↗
CH 32 · Data Protection & Privacy · pp. 936–950
32.1 Privacy-by-Design Implementation
Data Classification at Ingestion: All incoming data classified as public (no restrictions), pseudonymous (anonymisation required), personal (consent-gated, DPDP rights apply), or sensitive personal (additional protections). Classification enforced architecturally, not procedurally.
Expert Model Data Isolation: Each expert is trained on domain-specific data only. An Indian constitutional law expert trained on a legal corpus has no access to any user's query history. Cross-domain contamination is architecturally impossible.
User Data Rights Implementation: Right to access: an API endpoint returning all stored user data in machine-readable format. Right to erasure: cascading deletion across all storage layers within 72 hours. Right to portability: JSON-LD export. All three implemented as first-class API endpoints.
DPDP Rules 2025: Digital Personal Data Protection Rules notified November 13, 2025. Data Protection Board registration completed at incorporation. Data fiduciary obligations documented in Privacy Notice v1.0.
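The three rights operations above can be sketched in plain Python. The `UserDataStore` class, the storage-layer names, and the schema.org context are hypothetical illustrations; only the rights themselves (machine-readable access, cascading erasure within 72 hours, JSON-LD portability) come from the text:

```python
import json
from datetime import datetime, timedelta, timezone

class UserDataStore:
    """Illustrative in-memory stand-in for the rights endpoints."""

    def __init__(self):
        # stand-ins for the "storage layers" erasure must cascade across
        self.layers = {"queries": {}, "profiles": {}, "billing": {}}

    def access(self, user_id: str) -> str:
        """Right to access: all stored data, machine-readable."""
        data = {name: layer.get(user_id) for name, layer in self.layers.items()}
        return json.dumps(data)

    def erase(self, user_id: str) -> datetime:
        """Right to erasure: cascading delete; returns the 72-hour SLA deadline."""
        for layer in self.layers.values():
            layer.pop(user_id, None)
        return datetime.now(timezone.utc) + timedelta(hours=72)

    def export_jsonld(self, user_id: str) -> str:
        """Right to portability: JSON-LD export."""
        record = {"@context": "https://schema.org", "@id": user_id,
                  "data": {n: l.get(user_id) for n, l in self.layers.items()}}
        return json.dumps(record)

store = UserDataStore()
store.layers["queries"]["u1"] = ["expert-routed query #1"]
deadline = store.erase("u1")
# after erasure, access must show no residual data in any layer
assert json.loads(store.access("u1")) == {"queries": None, "profiles": None, "billing": None}
```

A production version would sit behind authenticated API endpoints and delete from durable stores, but the contract is the same: every layer participates in the cascade.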
CH 33 · AI Ethics & Responsible Deployment · pp. 951–960
33.1 Ethics Framework
Prohibited Use Cases: Expert routing systems will not be built or certified for autonomous weapons targeting, mass surveillance, discriminatory housing or lending, disinformation generation, content targeting of minors, or any use case where expert output directly determines a legal outcome without human review.
Bias Detection Programme: Every expert model is evaluated on domain-specific fairness benchmarks before certification. Models demonstrating demographic disparity above 5% are blocked from the production registry. All production experts are re-evaluated quarterly.
High-Risk Domain Policy: Expert models for medical diagnosis support, legal advice, and financial regulatory compliance must include explicit uncertainty quantification and clear recommendations for professional human review wherever confidence falls below 85%. These are assistants to experts, not replacements.
Red Teaming Protocol: Monthly adversarial testing covering attempts to extract training data from experts, routing beyond certified scope, and harmful-output generation through multi-expert synthesis. Findings inform backbone routing guardrails and expert certification updates.
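Two of the certification gates above reduce to simple threshold checks. A minimal sketch, assuming outcome rates are reported per demographic group as fractions; the function names and metric format are illustrative, not Clark's actual pipeline:

```python
def passes_bias_gate(group_rates: dict, max_disparity: float = 0.05) -> bool:
    """Block models whose best/worst group outcome rates differ
    by more than the 5-point disparity ceiling."""
    return max(group_rates.values()) - min(group_rates.values()) <= max_disparity

def needs_human_review(confidence: float, threshold: float = 0.85) -> bool:
    """High-risk domains: flag for professional review when the
    model's calibrated confidence falls below 85%."""
    return confidence < threshold

assert passes_bias_gate({"group_a": 0.91, "group_b": 0.88})      # 3-point gap: certified
assert not passes_bias_gate({"group_a": 0.91, "group_b": 0.82})  # 9-point gap: blocked
assert needs_human_review(0.79) and not needs_human_review(0.90)
```

The substance of the programme is in the benchmarks that produce these numbers; the gate logic itself is deliberately simple so it is auditable.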
◆ Volume XI — Regulatory & Ethics Budget
DPDP Act compliance implementation: ₹8,00,000
EU AI Act compliance advisory: ₹6,00,000
AI ethics review board setup: ₹4,00,000
Regulatory monitoring subscriptions: ₹5,00,000
Volume Total: ₹23,00,000
VOL. XII
India Market Deep-Dive
India-specific strategy · IndiaAI Mission · talent ecosystem · city operations · government as customer
Ch. 34–36 · Pages 961–1,010 · India+
CH 34 · India Market Strategy · pp. 961–985
34.1 Why India Is the Optimal Launch Market for This Architecture

India is not simply 'a large market.' It is the optimal calibration environment for Clark's architecture. Cost sensitivity forces genuine efficiency — you cannot hide behind compute brute-force when your customer base has a maximum individual willingness to pay of ₹500–2,000/month. This constraint forces Clark to build the most efficient possible routing architecture, and that efficiency becomes a structural competitive advantage when the platform expands to higher-paying markets where competitors are still paying for inefficiency.

India also provides the linguistic and jurisdictional diversity that makes the detached MoE architecture's breadth claim credible from day one. A system that can handle Tamil legal questions, Gujarati business analysis, Hindi technical writing, and English financial modelling simultaneously — with certified expert depth in each language — is demonstrably more capable than any English-optimised system. India is where the expert registry must prove its multi-lingual depth.

34.2 India Market Data — All Sources Verified
India AI Market 2026 · NASSCOM ↗ · $15.7B in FY26 · Growing at 35% CAGR · $71B projected by 2030 · Government committed ₹10,372 Cr through IndiaAI Mission
Developer Ecosystem · GitHub Octoverse ↗ · 17M+ developers on GitHub · India overtook the US in open-source contributor count in 2025 · 57.5M developers projected by 2030
VC Funding · TechCrunch 2025 ↗ · $643M AI VC funding in India in 2025 · $11B total startup funding · Investors are becoming more selective, so Clark's institutional-quality dossier is a competitive advantage
Priority Customer Segments: BFSI (₹8,000–40,000/month API): fraud detection, KYC, regulatory compliance. Legal (₹5,000–25,000/month): law firms, in-house legal, judiciary support. Education (₹300–2,000/month): 250M students, IIT/IIM research, coaching institutes.
First 100 Expert Models — India Priority: Indian constitutional law · Companies Act 2013 · GST compliance · SEBI regulations · RBI guidelines · NEET/JEE subject experts · IPC/CrPC · NLP experts for all 22 scheduled Indic languages · Indian medical protocols · CBSE/ICSE curriculum domains
CH 35 · Government & Policy · pp. 986–1,005
35.1 Government as First-Mover Customer
IndiaAI Mission Empanelment · PIB ↗ · ₹10,372 Cr over 5 years · 10,000 GPU compute units through PPP · Innovation Centre cohort 2 application target: Month 6 · Non-dilutive compute credits + market validation signal
GEM Portal · gem.gov.in ↗ · Government e-Marketplace: the primary procurement channel for all government AI services. Empanelment target: Month 12. Ministry of Education and MeitY Digital India as primary target ministries.
Ministry of Education: 250M students. National Digital Education Architecture (NDEAR). Clark's education expert registry — mapped to curriculum requirements across 22 languages and 36 state boards — is the natural national adaptive-learning infrastructure.
Ministry of Law & Justice: 30M+ pending cases in Indian courts. Legal research and case analysis support. Clark's Indian law expert registry — constitutional, commercial, criminal, all 25 high court jurisdictions — addresses the core research bottleneck in case preparation.
CH 36 · India Talent & Operations · pp. 1,006–1,010
36.1 Talent Architecture
Primary Talent Source · IITM Research Park ↗ · IIT Madras proximity provides PhD and MTech student access for research intern positions. The adjacent Chennai IT cluster of 450,000+ workers provides an engineering talent pipeline at competitive 2026 benchmarks.
Salary Benchmarks: Research Scientists: ₹2,75,000–3,50,000/month. Lead ML Engineers: ₹2,20,000–2,60,000/month. ML Engineers: ₹1,60,000–2,25,000/month. DevOps: ₹1,30,000–1,80,000/month. All drawn from NASSCOM Chennai cluster data.
Attrition Planning: 12% annual attrition is the industry average for Indian AI companies per LinkedIn Talent Insights. Mitigation: ESOP vesting, mission-aligned work, competitive compensation, the IITM Research Park environment, research publication opportunities.
◆ Volume XII — India Market Budget
Government relationship development: ₹8,00,000
GEM portal empanelment legal fees: ₹5,00,000
Regional partnership development: ₹6,00,000
Volume Total: ₹19,00,000
VOL. XIII
Partnership & Ecosystem Architecture
Strategic alliances · developer ecosystem · expert marketplace design · contribution protocol
Ch. 37–38 · Pages 1,011–1,050 · Ecosystem+
CH 37 · Strategic Alliance Architecture · pp. 1,011–1,030
37.1 Ten Partnerships That 10× Clark's Reach or Capability
1. AI4Bharat · IIT Madras ↗ · Open-source Indic language models as expert contributions to Clark's registry. Joint research on Indic expert training protocols. IITM proximity makes this a working relationship rather than a formal negotiation.
2. ICAI (Institute of Chartered Accountants): 400,000+ CAs. Expert model certification with ICAI technical input. Distribution through ICAI continuing education. Clark becomes the standard AI tool for India's chartered accountancy profession.
3. Bar Council of India: Indian law expert registry certified with Bar Council input. Distribution through state bar associations. Potential for Clark to become the standard AI research tool across the Indian legal profession.
4. Sarvam AI · sarvam.ai ↗ · Complementary architectures: Sarvam handles voice/language infrastructure, Clark handles reasoning orchestration. Integration: Sarvam voice input → Clark expert routing → Sarvam voice output. The combined system covers the full human-AI interaction cycle in 22 Indian languages.
5. Zoho / Freshworks: Clark's backbone embedded into Zoho CRM and Freshworks workflows. Distribution to 100M+ users without direct sales effort.
6. Government e-Marketplace · gem.gov.in ↗ · A single empanelment unlocks 700+ government departments as potential customers. Target: Month 12.
7. IIT Network (All 23 IITs): Faculty research expert model contribution programme. PhD intern pipeline. Academic credibility accelerating enterprise trust-building.
8. Jio Platforms: Bundling Clark's Growth tier with JioFiber premium plans. 200M+ potential users at near-zero CAC.
9. National Medical Commission: Healthcare expert registry certification. India's 1.4M+ licensed physicians as a potential expert contributor base.
10. NASSCOM AI Working Group: Industry standards body participation. First mover in defining the expert registry certification standard that becomes the de facto benchmark for the Indian AI market.
CH 38 · Developer Ecosystem · pp. 1,031–1,050
38.1 Expert Marketplace Design — The Registry Is Clark's Product
Contributor Journey: Developer portal signup → domain expert model upload → automated certification battery → human expert review → staged deployment (alpha → beta → production) → revenue earning begins
Revenue Share: 70% to contributor · 30% to Clark · Monthly payouts via UPI/NEFT · Revenue dashboard with real-time activation counts and earnings
Marketplace Economics at Target Scale: 10,000 experts × 100 activations/day × ₹2/activation = ₹20L/day gross · Clark's 30%: ₹6L/day · Annual Clark share: ₹21.6 Crore from the marketplace alone · Year 3 target
Developer Portal Features: Interactive API documentation · Sandbox with full backbone access for testing · SDKs in Python, Node.js, Java, Go · Discord community (target: 10,000 developers by Month 12) · Quarterly DevCon at IITM
Quality Standard: No pay-to-play listing. Every expert model is certified before production registration. Quality is non-negotiable — the marketplace's value depends entirely on the circuit routing to bulbs that actually illuminate correctly.
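The marketplace economics above can be verified line by line. All inputs are the dossier's own figures; the 360-day year is an assumption inferred from the stated ₹21.6 Cr annual share (a 365-day year would give ₹21.9 Cr):

```python
# Reproduce the marketplace revenue arithmetic from the dossier's figures.
experts = 10_000
activations_per_day = 100          # per expert
price_per_activation = 2           # ₹ per activation
clark_share = 0.30                 # 70/30 split, Clark side

gross_daily = experts * activations_per_day * price_per_activation
clark_daily = int(gross_daily * clark_share)
clark_annual = clark_daily * 360   # ASSUMED 360-day convention

print(gross_daily)   # 2000000   -> ₹20L/day gross
print(clark_daily)   # 600000    -> ₹6L/day to Clark
print(clark_annual)  # 216000000 -> ₹21.6 Cr annual Clark share
```

The sensitivity worth noting in diligence: the model is linear in activations per expert per day, which is the least-evidenced of the three inputs.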
◆ Volume XIII — Ecosystem & Partnership Budget
Partnership development & legal: ₹12,00,000
Developer portal build and maintenance: ₹18,00,000
DevCon event (quarterly): ₹8,00,000
Expert contributor outreach (500 targets): ₹6,00,000
Volume Total: ₹44,00,000
VOL. XIV
Technical Due Diligence Preparation
Architecture validation · IP ownership · codebase · investor technical review preparation
Ch. 39–40 · Pages 1,051–1,090 · TDD+
CH 39 · Architecture Validation Package · pp. 1,051–1,070
39.1 Technical Architecture Summary for Investor Review
Languages: Python (ML/research pipeline) · Go (inference serving, performance-critical) · Rust (backbone routing engine, microsecond-level routing decisions) · TypeScript (developer portal frontend)
Open-Source Licences: PyTorch (BSD-3) · Hugging Face Transformers (Apache 2.0) · FastAPI (MIT) · LangChain (MIT) · Kubernetes (Apache 2.0) · Qdrant (Apache 2.0). No GPL or AGPL components in any proprietary layer.
IP Ownership: All IP developed by founders and employees is assigned to Clark AI Private Limited via employment agreements with explicit IP assignment clauses. Provisional patent filed Month 3.
Build vs. Buy Decisions: Built: backbone routing engine, expert registry service, verification layer, expert certification pipeline. Bought/open-source: foundation model base weights, cloud infrastructure, observability (Datadog), CI/CD tooling. Every build decision required a proprietary advantage open source could not provide.
Known Technical Risks: (1) Routing accuracy below target — mitigated by the Month 4 go/no-go benchmark. (2) Expert certification throughput at 10,000-expert scale — mitigated by the automated pipeline design. (3) Backbone inference latency at P99 — mitigated by quantisation and caching strategies.
CH 40 · Investor Technical Review Q&A · pp. 1,071–1,090
40.1 Five Technical Questions a Seed Investor Will Ask
Q1: How does the backbone route without knowing expert registry contents in advance? The backbone is trained on problem-decomposition patterns and domain classification, not on the contents of specific expert models. It generates a semantic routing query; the expert index resolves this to specific model endpoints. Backbone and registry interact through a stable interface contract.
Q2: How is expert scope enforced — how does a legal expert not respond to a medical question? Each expert is certified with an out-of-domain refusal test during the certification battery. Models that respond outside their certified scope fail certification. The backbone's routing also constrains queries to certified domain boundaries.
Q3: What prevents an expert model from hallucinating within its own domain? Domain-specific accuracy benchmarking against ground-truth answers from authoritative sources during certification. Confidence calibration is trained explicitly. Outputs below a confidence threshold are flagged with explicit uncertainty markers rather than delivered as confident assertions.
Q4: How does the verification layer work between experts? The verification layer receives all expert outputs simultaneously and checks: (a) internal consistency within each output, (b) logical coherence across expert outputs, (c) absence of contradictions on points where domains overlap. Detected contradictions are routed back to the backbone for re-synthesis.
Q5: What is the latency budget for a multi-expert query? Target: under 3 seconds for a 5-expert parallel activation. Expert activations run in parallel, not sequentially, so the expert stage costs one activation, not five. Each 300M expert generates a response in 200–400ms. Verification: 100–200ms. Backbone synthesis: 300–500ms. Total: 200–400ms expert stage + 100–200ms verification + 300–500ms synthesis = 600–1,100ms. Comfortably within target.
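The Q5 latency budget can be expressed as a best/worst-case sum over the three serial stages, with the parallel expert stage counted once rather than per expert. A minimal sketch using the stated per-stage figures:

```python
# Per-stage (min_ms, max_ms) ranges. Experts run in parallel, so the
# expert stage contributes its slowest member once, not five times.
stages_ms = {
    "expert_activation": (200, 400),   # slowest of 5 parallel 300M experts
    "verification": (100, 200),
    "backbone_synthesis": (300, 500),
}

best = sum(lo for lo, _ in stages_ms.values())
worst = sum(hi for _, hi in stages_ms.values())
print(best, worst)  # 600 1100 -- both ends sit well inside the 3s target
```

The serial chain (expert stage → verification → synthesis) is what bounds latency; adding more parallel experts widens only the expert-stage range, not the number of terms in the sum.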
VOL. XV
Brand, Narrative & Communications
Brand architecture · category design · crisis protocols · founder PR · communications playbook
Ch. 41–42 · Pages 1,091–1,130
CH 41 · Brand Architecture · pp. 1,091–1,110
41.1 Brand Identity

Clark does not try to sound like the future. It tries to become indistinguishable from how serious work gets done. The brand is built around four properties: precision (every word used is the correct word, no superlatives), reliability (the brand voice is as consistent as the system's outputs), depth (never shallow, never trend-chasing), and clarity (complex architecture explained in terms a tenth-grader could follow and a domain expert would not find imprecise).

Name Origin: From the historical figure of the clerk — the person responsible for structuring knowledge and enabling action within institutions. Quiet infrastructure. Functional excellence. The person who made the organisation actually work.
Category Name: 'Intelligence Infrastructure Systems' — not AI assistant, not LLM platform. Infrastructure that other systems are built on. The layer beneath, not the interface above.
Brand Voice: Precise. Confident. Never breathless. The brand speaks the way the system works: structured, reliable, clear. One rule: if you would be embarrassed saying it to a domain expert in their field, don't say it.
Visual Identity: Clean, structured, light. No gradients that evoke vaporware. Typography-driven design that conveys the primacy of language and structure. The interface disappears; the intelligence remains.
Trademark Strategy: 'Clark AI', 'Clark Intelligence Infrastructure', and 'Intelligence Infrastructure Systems' trademark filings in India, the US, EU, and UK. Priority: Month 3, concurrent with the patent application.
CH 42 · Communications & Crisis · pp. 1,111–1,130
42.1 Crisis Communication Framework
First 4 Hours — Any Incident: Acknowledge (internally within 15 minutes, publicly within 1 hour where legally required) → Contain → Investigate → Communicate (factual, direct, without speculation)
Technical Incident Protocol: The CEO is the technical spokesperson for all AI system failures. No employee social media statements without CEO approval during active incidents. Status page updated within 5 minutes of incident detection.
Data Incident Protocol: The DPDP Act requires notification of affected data principals within 72 hours of breach detection. Incident response playbook pre-written, legally reviewed, and stored in the data room.
Bad News Protocol: Bad news is communicated to investors within 24 hours of materialising. Never let investors read it elsewhere first. Format: what happened, what we know, what we are doing, what we need. No spin.
Proactive Reputation Management: Regular technical writing by Maurya on backbone architecture, routing quality, and verification methods. Targets: The Ken, Analytics India Magazine, ACM/IEEE conferences, LinkedIn.
◆ Volume XV — Brand & Communications Budget
PR agency retainer · ₹24,00,000
Brand identity design · ₹8,00,000
Trademark filing — India, US, EU, UK · ₹6,00,000
Content programme — blog, whitepapers, technical writing · ₹12,00,000
Volume Total · ₹50,00,000
VOL. XVI
Customer Success & Retention
Onboarding design · health scoring · churn prediction · NRR engineering · expansion architecture
Ch. 43–44 · Pages 1,131–1,170
CH 43 · Customer Success Architecture · pp. 1,131–1,150
43.1 From Signup to Expert Dependency
Time-to-First-Value Target: Under 5 minutes from account creation to the first expert-routed query completing with a demonstrably better output than a general model would provide.
Activation Flow: 1. Account creation. 2. Domain selection — 'What is your primary work area?' → routes to 5 recommended experts. 3. Pre-populated example query for the selected domain. 4. Expert response with attribution visible. 5. Comparison toggle showing the same query through the general backbone only. The 'wow moment' is designed at Step 4.
Customer Health Score: Query frequency (30%) · expert diversity accessed (25%) · output acceptance rate (25%) · API integration depth (20%). A score below 40 triggers CSM outreach within 24 hours.
Churn Prediction — Five 90-Day Signals: (1) Reduced query frequency → (2) reversion to general backbone → (3) expert scope narrowing → (4) API call decline → (5) support ticket increase. First signal detected → CSM contact within 72 hours.
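A minimal sketch of the health-score rule above. The weights and the 40-point threshold are from the text; the component names and the 0–100 normalisation are illustrative assumptions:

```python
# Customer health score per the 30/25/25/20 weighting.
# Component names and 0-100 normalisation are assumptions.
WEIGHTS = {
    "query_frequency": 0.30,
    "expert_diversity": 0.25,
    "output_acceptance": 0.25,
    "api_integration_depth": 0.20,
}

def health_score(components):
    """components maps each metric to a normalised 0-100 value."""
    return sum(WEIGHTS[name] * value for name, value in components.items())

def needs_csm_outreach(score):
    return score < 40   # below 40 -> CSM outreach within 24 hours

s = health_score({
    "query_frequency": 20,
    "expert_diversity": 30,
    "output_acceptance": 50,
    "api_integration_depth": 40,
})
# 0.30*20 + 0.25*30 + 0.25*50 + 0.20*40 = 34 -> triggers outreach
assert needs_csm_outreach(s)
```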
CH 44 · Expansion Revenue Engine · pp. 1,151–1,170
44.1 NRR Architecture — Path to 130%+
NRR Mechanics: NRR = (Starting ARR + Expansion − Contraction − Churn) ÷ Starting ARR. Target: 130%+ in Year 3. Primary driver: expert registry growth means users naturally access more experts over time, increasing API usage organically without upsell effort.
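The formula can be exercised on a hypothetical cohort (all numbers below are illustrative, not projections from the plan):

```python
def nrr(starting_arr, expansion, contraction, churn):
    """NRR = (Starting ARR + Expansion - Contraction - Churn) / Starting ARR."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Hypothetical cohort in ₹ Crore: ₹10 Cr starting ARR, ₹4 Cr expansion,
# ₹0.5 Cr contraction, ₹0.5 Cr churned -> 1.30, i.e. the 130% Year 3 target.
assert abs(nrr(10, 4, 0.5, 0.5) - 1.30) < 1e-12
```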
Upsell Triggers: Usage approaching tier limit · new expert category added in user's adjacent domain · enterprise team size growth detected · custom expert model development inquiry (the highest expansion revenue item)
Best Expansion Revenue Signal: An enterprise dissatisfied with existing expert quality is the most motivated buyer of custom expert development. Dissatisfaction converts into a premium revenue event.
◆ Volume XVI — Customer Success Budget
CS team (6 CSMs + Manager) — 24-month payroll · ₹1,20,50,000
Success platform tooling (Gainsight or equivalent) · ₹18,00,000
Training materials & knowledge base · ₹8,00,000
NPS and CSAT tooling · ₹6,00,000
Volume Total · ₹1,52,50,000
VOL. XVII
ESG & Responsible AI Framework
Environmental sustainability · social impact · AI safety · ethics governance · carbon footprint
Ch. 45–47 · Pages 1,171–1,210
CH 45 · Environmental Sustainability · pp. 1,171–1,185
45.1 Carbon Footprint and Mitigation
Training Phase Footprint: 256 H100s × 700W TDP × 6 months ≈ 130 tonnes CO₂ equivalent. Offset through verified carbon credits (Gold Standard or Verra VCS) purchased concurrent with training initiation.
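As a rough check on the figure above, using the 720 hr/month convention from the GPU budget lines: the arithmetic reproduces ~130 t CO₂e only under a renewable-heavy emission factor of roughly 0.17 kg CO₂e/kWh, which is an assumption on our part; on India's average grid (closer to 0.7 kg/kWh) the same energy would be several times higher.

```python
# Back-of-envelope for the training-phase carbon figure.
# The emission factor is an ASSUMPTION chosen to reproduce ~130 t;
# it implies renewable-heavy supply, not India's average grid mix.
gpus, tdp_w, months, hours_per_month = 256, 700, 6, 720
energy_kwh = gpus * tdp_w * months * hours_per_month / 1000   # 774,144 kWh
factor_kg_per_kwh = 0.168                                     # assumption
tonnes = energy_kwh * factor_kg_per_kwh / 1000
assert round(tonnes) == 130
```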
Inference Phase Efficiency: The 72-GPU inference fleet is 72% smaller than the training fleet. Clark's routing architecture activates only 3–12 experts per query, versus the entire model in a dense system, so per-query energy use is a fraction of equivalent monolithic inference.
Green Compute Target: Year 2 — migrate inference workloads to renewable-energy-powered colocation. Tamil Nadu has 13GW+ of renewable capacity. Carbon-neutral operations target: Year 3.
Hardware Lifecycle: No owned GPU hardware — rented OpEx, zero disposal liability at fleet end. Server hardware at end of life: WEEE-compliant disposal through a certified recycler.
CH 46 · Social Impact · pp. 1,186–1,198
46.1 Access, Equity and Language Preservation
Student Access Policy: Free tier permanently maintained for students in government schools and undergraduate institutions. Partnership with DIKSHA (the Ministry of Education's digital platform) for classroom deployment at zero cost to government schools.
Expert Contributor Economic Inclusion: The expert marketplace creates a new income stream for domain experts who previously had no mechanism to monetise specialised knowledge at scale. A retired judge contributing an Indian criminal procedure expert earns passive income from every law student who routes queries to their model.
Indic Language Commitment: Certified expert models in all 22 scheduled Indian languages, including lower-resource languages: Santali, Bodo, Dogri, Maithili. Not because the market demands it — because the infrastructure should serve everyone.
CH 47 · AI Safety & Ethics Governance · pp. 1,199–1,210
47.1 Safety Framework
Red Teaming Programme: Monthly adversarial testing covering prompt injection attacks on expert routing, attempts to extract training data from expert models, and multi-expert synthesis attacks targeting harmful output generation. Findings update backbone routing guardrails and expert certification requirements.
Ethics Review Board: 5 members at Series A (2 Clark executives, 1 external AI ethics researcher, 1 rotating domain expert, 1 legal/regulatory expert). Authority: can halt deployment of any expert model or backbone update pending ethical review.
High-Risk Domain Policy: Medical diagnosis support, legal advice, and financial regulatory compliance expert models must include explicit uncertainty quantification and a recommendation for professional human review wherever confidence is below 85%. These are assistants to experts, not replacements for them.
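The high-risk gate can be sketched in a few lines. Domain labels, field names, and the output shape are illustrative assumptions; the 85% threshold is from the policy:

```python
# Sketch of the high-risk domain gate: below 85% confidence, outputs in
# medical/legal/financial-compliance domains carry an explicit uncertainty
# marker and a human-review recommendation. Names are illustrative.
HIGH_RISK_DOMAINS = {"medical_diagnosis", "legal_advice", "financial_compliance"}
CONFIDENCE_FLOOR = 0.85

def present_output(domain, answer, confidence):
    if domain in HIGH_RISK_DOMAINS and confidence < CONFIDENCE_FLOOR:
        return {
            "answer": answer,
            "uncertainty": f"Confidence {confidence:.0%} is below the "
                           f"{CONFIDENCE_FLOOR:.0%} floor for this domain.",
            "recommend_human_review": True,
        }
    return {"answer": answer, "recommend_human_review": False}

out = present_output("legal_advice", "Clause 12 likely applies.", 0.62)
assert out["recommend_human_review"] is True
```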
◆ Volume XVII — ESG Budget
Carbon offset programme — training phase · ₹8,00,000
ESG reporting & measurement tooling · ₹4,00,000
Ethics review board compensation & operations · ₹6,00,000
Indic language low-resource expert development · ₹12,00,000
Volume Total · ₹30,00,000
VOL. XVIII
Competitive Intelligence Operations
Monitoring systems · war gaming · scenario planning · pre-mortem analysis
Ch. 48–49 · Pages 1,211–1,250
CH 48 · Intelligence Gathering Systems · pp. 1,211–1,228
48.1 Continuous Competitive Monitoring
Patent Monitoring: Monthly review of AI patent filings from OpenAI, Google, Anthropic, Meta, Mistral, and Indian AI companies. Specific watch: patent claims touching expert routing, multi-model orchestration, or detached MoE architectures. Freedom-to-operate analysis refreshed quarterly.
Talent Signal Monitoring: LinkedIn alerts for key technical hires at competitors signalling strategic pivots. Ten Google ML infrastructure hires specialising in multi-model serving is a competitive signal; hiring of mixture-of-experts routing specialists is a red flag requiring immediate strategic response.
Pricing Intelligence: Quarterly review of all competitor pricing pages (all sources verified and linked throughout this document). Price changes at OpenAI or Google trigger immediate value-proposition impact analysis.
Expert Marketplace Intelligence: Monitoring for any competitor attempting to build a similar expert registry. Primary signal: job postings for 'expert model curation' or 'domain-specific fine-tuning programme' roles at AI companies.
CH 49 · War Gaming & Scenario Planning · pp. 1,229–1,250
49.1 Five Pre-Played Scenarios and a Pre-Mortem
SCENARIO 1
OpenAI Announces Expert Routing Product
Examine the architecture. If it is internal MoE (routing between parameter blocks within one model), it is not detached MoE. Communicate the architectural distinction. Accelerate expert registry depth.
SCENARIO 2
Google Acquires a Competitor
Deepen India-sovereign positioning — Google's regulatory exposure in India creates structural advantage for Indian-first infrastructure. Accelerate federated contributor network.
SCENARIO 3
Major Data Breach at Clark
First 4-hour protocol activated. DPDP Act notification within 72 hours. D&O and cyber insurance claims initiated. Technical post-mortem published within 30 days. Zero tolerance for delayed disclosure.
SCENARIO 4
Expert Marketplace Quality Failure
Immediate routing suspension for the failing expert. Root cause analysis. Certification battery updated. Customer communication with specific affected queries identified and recomputed.
SCENARIO 5
Key Investor Withdraws Mid-Round
Bridge protocol: founders inject personal capital for 60 days. CFO activates backup investor pipeline. FOMO communication to all engaged investors.
PRE-MORTEM
If Clark Fails in Year 3
Most likely cause: Month 4 benchmark validated the keystone assumption at minimum viable threshold, but expert routing never achieved the 90%+ accuracy required for genuine expert-level reliability. Users experience Clark as 'better than GPT-5.4 but not trustworthy for critical work.' Expert marketplace never activates.
VOL. XIX
Board, Governance & Investor Relations
Board architecture · governance documentation · ongoing IR cadence · succession planning
Ch. 50–52 · Pages 1,251–1,290
CH 50 · Board Architecture · pp. 1,251–1,268
50.1 Board Composition and Governance
Seed Board Composition (3 members): CEO (Maurya) · Lead Investor (1 seat) · Independent Director (AI domain expert with deep technical credibility)
Reserved Matters — Board Approval Required: New funding rounds · acquisitions above ₹1 Crore · C-suite hires and terminations · IP licensing agreements · annual budget approval · related-party transactions
Board Cadence: Monthly calls (60 minutes) · quarterly full meetings (half-day, IITM Research Park) · annual strategy session (full day). Board pack sent 5 business days in advance.
Committee Structure: Technical Advisory Committee (CTO + Independent Director + 2 external AI researchers) covers backbone training progress and expert certification standards; Audit & Compensation Committee (CFO + Lead Investor + Independent Director) covers financial oversight.
Investor Update — Monthly (1 page): Revenue and ARR · burn and runway · headcount · expert registry milestone · three key wins · three key risks · one upcoming decision requiring input
Bad News Protocol: Bad news is communicated to investors within 24 hours. Never let investors read it elsewhere first. Format: what happened, what we know, what we are doing, what we need.
CH 51 · Governance Documentation · pp. 1,269–1,280
51.1 Key Documents
Articles of Association: Tailored for a deep tech startup, covering IP protection, expert marketplace governance, and international expansion authorisation.
Shareholders Agreement (SHA): Drag-along and tag-along · anti-dilution (broad-based weighted average) · information rights for investors holding > 5% · consent rights for transactions above ₹1 Crore
ESOP Plan: SEBI-compliant · 4-year vesting · 1-year cliff · exercise price at last round valuation · 7.6% post-seed, expanding to 12% at Series A
IP Assignment Agreements: All founders, employees, and contractors sign IP assignment before their first day. No legacy IP owned by individuals. All architecture, models, code, and documentation owned by Clark AI Private Limited.
CH 52 · Ongoing Investor Relations · pp. 1,281–1,290
52.1 Follow-On Investment Nurturing
Seed → Series A: Seed investors who maintain thesis alignment are the highest-probability Series A participants. Monthly updates are the primary nurturing mechanism. A 'Series A preview' briefing at Month 14 gives seed investors first right of refusal on pro-rata.
Series A Readiness Checklist: 70B backbone deployed · ≥18 enterprise customers · ARR ≥ ₹50 Cr · expert registry ≥ 2,000 models · SOC 2 Type II received · data room current · international expansion plans documented
◆ Volume XIX — Governance Budget
Board operations & legal documentation · ₹10,00,000
ESOP administration platform · ₹6,00,000
Independent director compensation (2 years) · ₹24,00,000
Investor relations tooling · ₹4,00,000
Volume Total · ₹44,00,000
VOL. XX
Exit Strategy, Legacy & The 50-Year Vision
Acquisition playbook · IPO pathway · category ownership · generational mission
Ch. 53–54 · Pages 1,291–1,330
CH 53 · Exit Strategy · pp. 1,291–1,310
53.1 Likely Acquirers — Strategic Rationale
Microsoft / Azure: An intelligence infrastructure acquisition completes Azure's enterprise AI stack. The expert marketplace aligns with Microsoft's enterprise software distribution across Office 365 and Dynamics. Acquisition premium: 15–25× ARR.
Google / Alphabet: Clark's expert routing would be complementary to Google Search rather than competitive — search finds the information, Clark's experts reason about it. Resolves the search conflict through ownership rather than competition.
Reliance Jio: National AI infrastructure play. India-sovereign intelligence infrastructure aligned with Jio's national scale ambitions. Clark's expert models distributed through JioFiber to 200M+ users.
Infosys / TCS: Enterprise AI managed services. Clark's expert marketplace powers the AI consulting layer for India's two largest IT services companies, deployed to their global enterprise client base.
Strategic Cultivation Timeline: Begin cultivating relationships with 5+ potential acquirers 3–4 years before any exit event. Ensure Clark appears in all strategic planning conversations for AI infrastructure at each potential acquirer.
53.2 IPO Pathway
NSE Main Board (NSE ↗): Minimum paid-up equity ₹10 Crore · minimum market cap ₹25 Crore · 3-year operating track record · positive net worth. Clark targets an NSE main board listing at Year 7–8.
IPO Trigger Conditions: ARR ≥ ₹500 Crore · expert marketplace contribution > 40% of revenue · NRR > 130% for 4 consecutive quarters · 3+ years of audited financials
IPO Valuation Framework: AI infrastructure at IPO trades at 10–20× ARR. At ₹3,000 Crore ARR (base case Year 5), implied pre-IPO valuation is ₹30,000–60,000 Crore. Seed investors at 24% (pre-dilution): ₹7,200–14,400 Crore, a 50–100× cash-on-cash return on the ₹144 Crore seed investment.
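The valuation arithmetic above, reproduced step by step (₹ Crore throughout; the figures are the document's own, not new projections):

```python
# IPO valuation arithmetic, all values in ₹ Crore.
arr_cr = 3000                   # base case Year 5 ARR
multiples = (10, 20)            # stated AI-infrastructure IPO range
valuation = tuple(arr_cr * m for m in multiples)          # (30000, 60000)
seed_stake_pct = 24             # seed stake, pre-dilution
seed_value = tuple(v * seed_stake_pct // 100 for v in valuation)
seed_invested_cr = 144
cash_on_cash = tuple(v // seed_invested_cr for v in seed_value)

assert valuation == (30000, 60000)
assert seed_value == (7200, 14400)
assert cash_on_cash == (50, 100)   # the stated 50-100x return
```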
CH 54 · Legacy Architecture — The 50-Year Vision · pp. 1,311–1,330
54.1 Category Ownership

Clark's ambition is not to be the best AI company. It is to define what the category of Intelligence Infrastructure Systems means — to set the standards by which all future systems are measured, to establish the expert model certification protocols that the entire industry adopts, and to create the open ecosystem through which any qualified entity can contribute expertise to the global intelligence layer.

The expert certification standard that Clark develops internally will, within five years, be proposed as an industry standard through NASSCOM, ISO, and IEEE. Just as TCP/IP is the standard through which all internet traffic flows, Clark's expert model interface specification will be the standard through which all deployed intelligence is routed. This is category ownership at its deepest.

54.2 The Generational Mission

In fifty years, Clark is not a company. It is the infrastructure. The circuit is wired into every institution, every profession, every domain of human knowledge. No one asks which intelligence infrastructure they use — the way no one asks which electrical infrastructure powers their building. Clark is the circuit through which the entire accumulated expertise of human civilisation is made accessible to every person on Earth, in their language, in their context, at the cost of electricity.

54.3 Succession Architecture
Distributed Ownership Design: Critical technical and operational knowledge is documented, distributed, and independent of any single individual from day one. The circuit must function even if any lightbulb is removed — including the founding team.
Governance Continuity: Seed: 3 board members. Series A: 5. Series B+: 7 (add audit chair). IPO: 9, with a majority independent. At every stage, the governance structure reduces key-person dependency.
Mission Institutionalisation: Clark's commitment to Indic language coverage, an open expert marketplace, and free access for students is written into the Articles of Association — not dependent on founder presence but embedded in the corporate structure.
30 Key Milestones — 24-Month Programme
Investor and board checkpoint system. Hit = on track. Miss = explain and adjust within 48 hours.
Mo.0
Apr 2026
LEGAL
M01: Company incorporated · DPIIT Startup India recognition · IITM Research Park incubation onboarded
MCA Registration No. · DPIIT certificate · office keys
Mo.0
Apr 2026
INFRA
M02: 256×H100 GPU cluster live at CoreWeave · backbone training begins · data pipeline active
GPU cluster billing confirmed · training loss tracking active
Mo.1
May 2026
RESEARCH
M03: Data pipeline live · 100B tokens collected and cleaned · tokeniser trained
100B tokens confirmed · BPE 100K vocab tokeniser published internally
Mo.2
Jun 2026
RESEARCH
M04: 1B backbone baseline trained · first 20 expert models in development
Perplexity < 15 · expert development pipeline operational
Mo.3
Jul 2026
RESEARCH
M05: 7B backbone training begins · provisional patent filed
Training loss < 2.0 · patent application number received
Mo.4
Aug 2026
PRODUCT
M06: 7B backbone benchmark — MMLU >60% · internal API alpha with 10 testers · 10 experts certified
Both benchmarks passed · first 10 experts in alpha registry
Mo.5
Sep 2026
BUSINESS
M07: First enterprise LOI signed · 30B backbone begins
1 signed LOI · 30B training job launched and stable
Mo.6
Oct 2026
INFRA
M08: Scale to 72 inference GPUs · 70B backbone training begins · expert registry at 100 models
72 GPUs active · 100 experts certified and live
Mo.7
Nov 2026
BUSINESS
M09: 🎯 BETA LAUNCH — First paying customer · public API beta live
₹2L first-month revenue · 100 beta users · expert routing in production
Mo.9
Jan 2027
BUSINESS
M10: 3 enterprise customers · ₹11L MRR · SOC 2 Type I audit initiated
3 signed contracts · SOC 2 firm engaged · 300+ experts registered
Mo.10
Feb 2027
PRODUCT
M11: Expert marketplace open to third-party contributors · developer portal live
First external contributor registered · Discord at 500+ members
Mo.12
Apr 2027
FINANCE
M12: ₹80L MRR · 18 customers · Series A data room live · SOC 2 Type I received
₹80L MRR confirmed · certificate issued · 500+ experts in registry
Mo.14
Jun 2027
BUSINESS
M13: Expert marketplace contributors generating real revenue · 200+ registered experts
Contributor payouts initiated · 10+ external contributors earning
Mo.16
Aug 2027
PRODUCT
M14: Expert marketplace full launch · 3rd-party revenue > 10% of total
Marketplace live · first ₹5L+ in marketplace revenue
Mo.18
Oct 2027
FINANCE
M15: Series A raise initiated · EBITDA approaching break-even · 2,000+ experts
First term sheet received · EBITDA within ₹50L of zero
Mo.20
Dec 2027
BUSINESS
M16: US market entry — first US enterprise customer · ₹3 Cr+ MRR
US customer signed · ₹3 Cr MRR confirmed
Mo.22
Feb 2028
FINANCE
M17: 🎯 FCF BREAK-EVEN ACHIEVED · ISO 27001 initiated
FCF positive · ISO 27001 gap analysis complete
Mo.24
Apr 2028
FINANCE
M18: ₹6 Cr/month revenue target · Series A closed · 5,000+ experts registered
₹6 Cr MRR · Series A wire confirmed · 5,000 experts certified
Complete Financial Architecture · All 20 Volumes · 24 Months · Every Rupee Accounted For
Master Budget
₹144 Crore Seed Capital — Hierarchically Deployed Across All Operational Categories
Cat. · Category · Description · 24M Total ₹ · % of Seed
GPU · GPU Rental — Training Phase · 256 × H100 SXM5 × ₹150/hr × 720hr/mo × 6 months · CoreWeave · backbone training · ₹16,58,88,000 · 11.52%
GPU · GPU Rental — Inference Phase · 72 × H100 SXM5 × ₹150/hr × 720hr/mo × 18 months · expert serving + inference · ₹13,99,68,000 · 9.72%
HR · Employee Salaries · 3 founders + 97 employees · AI Research = 30.3% of payroll · Chennai 2026 benchmarks · ₹26,63,70,000 · 18.50%
HR · EPF + Gratuity (Statutory) · Employer PF 12% + gratuity · mandatory EPFO compliance · ₹2,23,85,311 · 1.55%
HW · MacBook Laptops (M5 Air PRO/STD) · PRO ₹1,49,900 · STD ₹1,19,900 · purchased on hire date · 100 units over 24 months · ₹1,36,39,900 · 0.95%
HW · Servers & Storage Hardware · API servers + storage nodes · one-time CapEx · fully depreciated Year 1 · ₹79,15,000 · 0.55%
OPS · Incubation / Office / Utilities · IITM Research Park + internet + electricity + facilities · ₹2,35,85,000 · 1.64%
OPS · Security / DevOps / SOC 2 · Datadog · SOC 2 Type I/II · CI/CD · GitHub Actions · security tooling · ₹2,57,44,962 · 1.79%
OPS · Software Licenses · GitHub · Jira · Slack · W&B · Google Workspace · Notion · monitoring · ₹1,21,60,943 · 0.84%
DATA · Dataset Licensing · Training datasets · HuggingFace + proprietary · upfront + periodic renewal · ₹1,00,00,000 · 0.69%
MKT · Go-To-Market (Sales + Marketing) · Performance marketing · content · sales team · CRM · events · PR · expert contributor outreach · ₹3,32,00,000 · 2.31%
LEG · Legal, IP & Risk · Patent portfolio (15 filings) · legal retainer · insurance · regulatory advisory · ₹99,00,000 · 0.69%
RES · Market Research & Intelligence · Customer discovery · analyst reports · competitive intelligence · pricing research · ₹1,24,00,000 · 0.86%
CS · Customer Success · 6 CSMs + Manager · success platform · training materials · NPS tooling · ₹1,52,50,000 · 1.06%
FIN · Finance Operations · Statutory audit · tax advisory · FP&A tooling · data room setup · ₹64,00,000 · 0.44%
GOV · Governance & Board · Board operations · legal docs · ESOP administration · investor relations · ₹44,00,000 · 0.31%
PART · Partnership & Ecosystem · Developer portal · DevCon events · expert contributor outreach · strategic partnerships · ₹44,00,000 · 0.31%
TOTAL PLANNED OPERATIONAL SPEND · ₹68,76,57,116 · 47.75%
STRATEGIC BUFFER RESERVE (Emergency · Scale-up · Unforeseen opportunities) · Unallocated — structural protection against GPU price increases, timeline slippage, or strategic opportunities · ₹75,23,42,884 · 52.25%
TOTAL SEED CAPITAL ACCOUNTED · ₹1,44,00,00,000 · 100.0%
All figures are 24-month operational projections based on the 100-tab Excel financial model (attached separately as Clark_AI_Master_Budget.xlsx). GPU pricing source: CoreWeave Pricing ↗ — H100 SXM5 $4.76/hr · ₹150/hr at ₹84/USD (RBI spot rate assumption). Salary benchmarks: NASSCOM Chennai IT Cluster 2026 · EPF rate: 12% employer contribution per EPFO regulations. India AI market: NASSCOM FY26 ↗. IndiaAI Mission: PIB Official ↗. DPDP Act: MeitY PDF ↗. EU AI Act: EUR-Lex ↗.
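The two GPU line items can be reproduced directly from their stated drivers. Note that the table models ₹150/hr per H100; the footnote's $4.76/hr at ₹84/USD would be roughly ₹400/hr, so we treat ₹150/hr as a discounted or reserved rate, which is an assumption on our part:

```python
# Cross-check of the two GPU rental line items in the master budget.
# ₹150/hr is the modelled rate from the table; the footnote's $4.76/hr
# at ₹84/USD would be ~₹400/hr, so ₹150/hr is presumably a discounted
# or reserved rate (an assumption, not stated in the document).
def gpu_cost_inr(gpus, rate_inr_per_hr, hrs_per_month, months):
    return gpus * rate_inr_per_hr * hrs_per_month * months

training = gpu_cost_inr(256, 150, 720, 6)     # ₹16,58,88,000
inference = gpu_cost_inr(72, 150, 720, 18)    # ₹13,99,68,000
seed_capital = 1_44_00_00_000                 # ₹144 Crore, in rupees

assert training == 16_58_88_000
assert inference == 13_99_68_000
assert round(100 * training / seed_capital, 2) == 11.52   # % of seed
assert round(100 * inference / seed_capital, 2) == 9.72
```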
End of Document

Clark AI April 2026 · Seed Stage · ₹144 Crore · Hierarchically Detached Federated Mixture-of-Experts