Clark AI is building a Hierarchically Detached Federated Mixture-of-Experts intelligence infrastructure: a 70-billion-parameter backbone — the circuit — to which 10,000 specialist expert models of 100M–500M parameters each — the lightbulbs — are attached. Total system parameter coverage: 5 trillion+. India first. Then the world.
Clark began with a simple but profound observation. The “council of experts,” popularized by Perplexity AI, proved that multiple models collaborating could outperform a single system. Intelligence scaled with diversity.
But Clark asked a deeper question: what if intelligence did not emerge from many independent systems, but from one unified intelligence capable of internally orchestrating expertise?
Instead of stitching models together, Clark built a single backbone that coordinates thousands of internal experts — each specialized, each context-aware, all operating within one cohesive system.
This is the shift: from external collaboration → to internal specialization.
| Dimension | Traditional “Council of Experts” | Clark Architecture |
|---|---|---|
| System Design | Multiple independent models collaborating | Single unified backbone with internal experts |
| Coordination | External orchestration between systems | Native routing inside one intelligence |
| Latency | High (cross-model communication overhead) | Low (intra-system routing) |
| Context Retention | Fragmented across models | Shared global context |
| Scalability | Complex integration overhead | Add experts without rewriting system |
| Core Philosophy | Many minds working together | One mind containing many |

| Capability | Clark Advantage | Impact |
|---|---|---|
| Deep Specialization | 10,000 domain experts coordinated by backbone | Near-human expert-level precision per domain |
| Efficient Inference | Only 3–12 experts activated per query | Massive capability at fraction of compute cost |
| Composable Intelligence | Experts dynamically combined per problem | Solves multi-domain problems natively |
| Federated Growth | External entities can plug in experts | Exponential ecosystem expansion |
| System Evolution | Add experts, not parameters to backbone | Scales without retraining entire model |
| Paradigm | From static models → to living intelligence infrastructure | |
This is not an improvement. It is a redefinition. Not many systems cooperating — but one system that contains all expertise within itself.
If an individual expert fails, the cost of rectification is limited to retraining that specific expert rather than the entire network. This stands in contrast to the current paradigm, where updates often require retraining the full model. As a result, recurring computational expenses fall significantly: instead of relying on hundreds or thousands of GPUs, the system can be maintained and updated using only tens of GPUs, enabled by the detached federated topology.
Understanding Clark's Hierarchically Detached Federated Mixture-of-Experts Architecture
The backbone does not answer questions. It is the electrical circuit — the copper wire running through the building (the Clark Network). It carries current (decomposed information), distributes intelligence, and is the infrastructure through which every lightbulb can function. On its own it produces no light. With 10,000 experts attached, it illuminates everything.
Three functions only: (1) Decompose — understand the deep structure of a problem, its logical dependencies, its domain category. (2) Route — decide which experts to activate and in what sequence. (3) Synthesise — receive expert outputs and compose one coherent, verified, traceable response.
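The three functions can be sketched as a minimal pipeline. This is an illustrative sketch only: the class and function names (`SubTask`, `decompose`, the stub registry) are hypothetical, and real experts would be separately trained 100M–500M parameter models rather than stub functions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubTask:
    domain: str
    prompt: str

def decompose(query: str) -> list[SubTask]:
    """(1) Decompose: split a query into domain-tagged sub-tasks."""
    # Stub: a real backbone would infer logical structure and dependencies.
    return [SubTask(domain=d, prompt=query) for d in ("ifrs", "company_law")]

def route(task: SubTask, registry: dict[str, Callable[[str], str]]) -> str:
    """(2) Route: activate only the expert registered for the task's domain."""
    return registry[task.domain](task.prompt)

def synthesise(outputs: list[str]) -> str:
    """(3) Synthesise: compose expert outputs into one coherent response."""
    return " | ".join(outputs)

# Stub experts standing in for trained specialist models.
registry = {
    "ifrs": lambda p: "IFRS expert answer",
    "company_law": lambda p: "Company-law expert answer",
}

answer = synthesise([route(t, registry) for t in decompose("Cross-border audit query")])
print(answer)  # IFRS expert answer | Company-law expert answer
```

The design point the sketch captures: only the experts named by the router run at all; every other specialist stays dark.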
Each expert is a lightbulb — engineered to illuminate one specific domain with extraordinary precision. They are completely detached from one another. A cardiac surgery expert has no awareness of the derivatives trading expert two sockets away. They are architecturally isolated — only connected to the backbone circuit.
The "federated" dimension: bulbs can be trained by different entities — IIT Madras trains a constitutional law expert, a pharma company trains a drug-interaction expert, Clark trains a mathematics expert — and all three plug into the same circuit. The circuit does not care who made the bulb.
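The circuit-does-not-care property can be illustrated with a toy registry (all names here are hypothetical, not Clark's actual registration protocol): the backbone records who made each bulb but routes identically regardless of contributor.

```python
# Illustrative sketch of federated registration, assuming a simple
# domain -> expert mapping; certification is elided.

class ExpertRegistry:
    def __init__(self) -> None:
        self._experts: dict[str, dict] = {}

    def register(self, domain: str, contributor: str, model_id: str) -> None:
        """Any certified entity can plug an expert into the backbone."""
        self._experts[domain] = {"contributor": contributor, "model": model_id}

    def lookup(self, domain: str) -> dict:
        """Routing consults only the domain, never the contributor."""
        return self._experts[domain]

registry = ExpertRegistry()
registry.register("constitutional_law", "IIT Madras", "conlaw-200m-v1")
registry.register("drug_interactions", "PharmaCo", "ddi-300m-v2")
registry.register("mathematics", "Clark", "math-500m-v3")
```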
| Month | Date | Phase | Revenue | Monthly Burn | Net Cash Flow | Event |
|---|---|---|---|---|---|---|
| Mo.0 | Apr 2026 | Training | ₹0 | ₹4.16 Cr | (₹4.16 Cr) | Circuit wiring begins · 256×H100 live · 3 founders |
| Mo.1 | May 2026 | Training | ₹0 | ₹3.32 Cr | (₹3.32 Cr) | 100B token data pipeline · tokeniser trained |
| Mo.2 | Jun 2026 | Training | ₹0 | ₹3.23 Cr | (₹3.23 Cr) | 1B backbone baseline · first expert models in dev |
| Mo.3 | Jul 2026 | Training | ₹0 | ₹3.53 Cr | (₹3.53 Cr) | 7B backbone begins · provisional patent filed |
| Mo.4 | Aug 2026 | Training | ₹0 | ₹3.52 Cr | (₹3.52 Cr) | 7B MMLU >60% · first 10 experts certified |
| Mo.5 | Sep 2026 | Training | ₹0 | ₹3.61 Cr | (₹3.61 Cr) | 30B backbone · 100 experts in registry |
| Mo.6 | Oct 2026 | Inference | ₹0 | ₹1.78 Cr | (₹1.78 Cr) | Scale to 72 GPUs · 70B live · expert routing active |
| Mo.7 | Nov 2026 | Beta | ₹2.0 L | ₹1.81 Cr | (₹1.79 Cr) | 🎯 FIRST PAYING CUSTOMER · beta API live |
| Mo.8 | Dec 2026 | Beta | ₹5.0 L | ₹1.90 Cr | (₹1.85 Cr) | Beta growing · 1st enterprise contract |
| Mo.9 | Jan 2027 | Beta | ₹11.0 L | ₹2.21 Cr | (₹2.10 Cr) | 3 enterprise customers · SOC 2 audit starts · 300+ experts |
| Mo.10 | Feb 2027 | Growth | ₹21.0 L | ₹2.20 Cr | (₹1.99 Cr) | Public API · expert marketplace opens to 3rd-party contributors |
| Mo.11 | Mar 2027 | Growth | ₹45.0 L | ₹2.38 Cr | (₹1.93 Cr) | 12 customers · external contributor payouts begin |
| Mo.12 | Apr 2027 | Growth | ₹80.0 L | ₹2.68 Cr | (₹1.88 Cr) | 18 customers · ARR ₹9.6 Cr · Series A data room live · SOC 2 Type I |
| Mo.13 | May 2027 | Growth | ₹1.15 Cr | ₹2.74 Cr | (₹1.59 Cr) | 25 customers · 500+ expert models registered |
| Mo.14 | Jun 2027 | Growth | ₹1.50 Cr | ₹2.88 Cr | (₹1.38 Cr) | 32 customers · US market entry planning |
| Mo.15 | Jul 2027 | Growth | ₹1.95 Cr | ₹3.13 Cr | (₹1.18 Cr) | 40 customers · 1,000+ experts in registry |
| Mo.16 | Aug 2027 | Scaling | ₹2.40 Cr | ₹2.95 Cr | (₹0.55 Cr) | Series A preparation · marketplace revenue material |
| Mo.17 | Sep 2027 | Scaling | ₹2.85 Cr | ₹2.84 Cr | +₹0.01 Cr | Series A initiated · near EBITDA break-even |
| Mo.18 | Oct 2027 | Scaling | ₹3.20 Cr | ₹2.75 Cr | +₹0.45 Cr | EBITDA positive · 2,000+ experts · US first customer |
| Mo.19 | Nov 2027 | Scaling | ₹3.55 Cr | ₹2.75 Cr | +₹0.80 Cr | Profitable months sustained |
| Mo.20 | Dec 2027 | Scaling | ₹4.00 Cr | ₹2.76 Cr | +₹1.24 Cr | Global expansion active · 3,000+ experts |
| Mo.21 | Jan 2028 | Scaling | ₹4.45 Cr | ₹2.76 Cr | +₹1.69 Cr | Series A closes · ISO 27001 initiated |
| Mo.22 | Feb 2028 | Scaling | ₹5.00 Cr | ₹2.76 Cr | +₹2.24 Cr | 🎯 FCF BREAK-EVEN ACHIEVED · 4,000+ experts |
| Mo.23 | Mar 2028 | Scaling | ₹6.00 Cr | ₹2.77 Cr | +₹3.23 Cr | Month 24 target met · 5,000+ experts registered |

| # | Category | 24M Total | % of Seed | Month 0 | Month 12 | Source |
|---|---|---|---|---|---|---|
| 1 | GPU Rental — Training (256×H100 × 6 Mo.) | ₹16,58,88,000 | 11.52% | ₹2,76,48,000 | ₹0 | CoreWeave ↗ · 256×₹1,08,000/mo×6 |
| 2 | GPU Rental — Inference (72×H100 × 18 Mo.) | ₹13,99,68,000 | 9.72% | ₹0 | ₹77,76,000 | 72 GPUs from Mo.6 onward · inference + expert serving |
| 3 | Employee Salaries & Benefits | ₹26,63,70,000 | 18.50% | ₹16,50,000 | ₹1,33,20,000 | 3 founders → 100 FTEs Mo.17 · Chennai 2026 benchmarks |
| 4 | MacBook Laptops (M5 Air PRO/STD) | ₹1,36,39,900 | 0.95% | ₹5,99,600 | ₹16,78,700 | PRO ₹1,49,900 · STD ₹1,19,900 · on hire date |
| 5 | Servers & Storage Hardware | ₹79,15,000 | 0.55% | ₹51,55,000 | ₹0 | API servers + storage nodes · one-time CapEx |
| 6 | Incubation / Office / Utilities | ₹2,35,85,000 | 1.64% | ₹4,85,500 | ₹10,36,500 | IITM Research Park ↗ + internet + facilities |
| 7 | Security / DevOps / SOC 2 | ₹2,57,44,962 | 1.79% | ₹7,91,666 | ₹12,34,998 | Datadog · SOC2 Type I/II · CI/CD · security tooling |
| 8 | Software Licenses | ₹1,21,60,943 | 0.84% | ₹89,718 | ₹6,07,207 | GitHub · Jira · Slack · W&B · Google Workspace · Notion |
| 9 | Dataset Licensing | ₹1,00,00,000 | 0.69% | ₹50,00,000 | ₹0 | Training datasets · HuggingFace + proprietary corpora |
| 10 | EPF + Gratuity (Statutory) | ₹2,23,85,311 | 1.55% | ₹1,38,663 | ₹11,19,392 | Employer PF 12% + gratuity · mandatory EPFO compliance |
| | TOTAL PLANNED OPERATIONAL SPEND | ₹68,76,57,116 | 47.75% | | | |
| ★ | STRATEGIC BUFFER RESERVE — emergency · scale-up · unforeseen | ₹75,23,42,884 | 52.25% | | | Unallocated — structural protection |
| | TOTAL SEED CAPITAL ACCOUNTED | ₹1,44,00,00,000 | 100.0% | | | |

| Company | Model | Input $/1M | Output $/1M | Indic? | Clark Advantage | Source |
|---|---|---|---|---|---|---|
| OpenAI · San Francisco | GPT-5.4 | $2.50 | $15.00 | ❌ None | 40% cheaper · 22 Indic languages · expert routing depth vs monolithic | OpenAI Pricing ↗ |
| Anthropic · San Francisco | Claude Sonnet 4.6 | $3.00 | $15.00 | ❌ None | Same price tier · adds Indic · DPDP-compliant by architecture | Anthropic Pricing ↗ |
| Google DeepMind · Mountain View | Gemini 2.5 Pro | $1.25–$2.50 | $10–$15 | ❌ None | No search-revenue conflict · India-sovereign · 10K detached experts vs monolith | Gemini Pricing ↗ |
| Mistral AI · Paris, France | Mistral Large 3 | $2.00 | $6.00 | ❌ None | Expert specialisation depth impossible in single dense model | Mistral Pricing ↗ |
| Krutrim / Ola · Bengaluru, India | Krutrim V2 (12B) | ₹7–17/M | Usage | ✅ 22 langs | 70B backbone + 10K experts vs single 12B dense model — different architecture class | Krutrim Cloud ↗ |
| Sarvam AI · Bengaluru, India | Sarvam 105B | Free | Free | ✅ 22 langs | Voice/translation focus vs Clark's reasoning orchestration — complementary, not competitive | Sarvam Pricing ↗ |
| AI4Bharat · IIT Madras, Chennai | IndicBERT / NLP | Free (OS) | Free | ✅ All 22 | Academic only — no commercial API. Expert model contributor and strategic partner. | AI4Bharat ↗ |
| CLARK AI ★ · Chennai · IITM Research Park | Clark System (70B + 10K experts) | $0.35 target | $1.50 target | ✅ 22+ languages | Hierarchically Detached Federated MoE · 5T+ param coverage · India-sovereign · open expert marketplace | This document |
Clark is an intelligence infrastructure company that builds the world's first Hierarchically Detached Federated Mixture-of-Experts reasoning system — a 70-billion-parameter backbone that routes intelligence to 10,000 specialist expert models (each 100M to 500M parameters), producing structured, verifiable, traceable outputs at a fraction of the cost of equivalent monolithic systems. The backbone is the circuit. The experts are the lightbulbs. The circuit does not produce light. It is the infrastructure through which every specialist illuminates exactly what they were trained to illuminate, precisely when required.
Together, a 70B backbone and 10,000 experts provide the knowledge coverage of a system with over 5 trillion parameters, but at the inference cost of activating only the handful of experts relevant to each specific query. This is the architecture the entire AI industry will eventually converge toward. Clark is building it first, from Chennai, and deploying it to the world.
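The coverage-versus-activation economics reduce to back-of-envelope arithmetic. The sketch below takes 500M (the upper end of the stated expert range) as the per-expert size and 12 as the per-query activation ceiling from the table above; both are inputs, not measurements.

```python
# Parameter coverage vs. parameters actually activated per query.
backbone = 70_000_000_000        # 70B orchestration backbone
num_experts = 10_000
expert_size = 500_000_000        # upper end of the 100M–500M range

coverage = backbone + num_experts * expert_size   # total knowledge coverage
active = backbone + 12 * expert_size              # worst case: 12 experts fire

print(f"coverage: {coverage / 1e12:.2f}T params")       # coverage: 5.07T params
print(f"active per query: {active / 1e9:.0f}B params")  # active per query: 76B params
```

So the system carries 5T+ parameters of coverage while any single query pays for roughly 76B, about 1.5% of the total.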
The AI industry has convinced itself that intelligence scales by making one model larger. This is correct but incomplete. Scaling a dense model improves average performance across all domains simultaneously, which sounds appealing until you consider the economics. To improve a dense model's IFRS accounting performance by 10%, you must increase the entire model's capacity — all the chemistry knowledge, all the history, all the code — by a proportional amount. You are paying the computational cost of a trillion-parameter system to improve one domain.
Clark improves IFRS performance by training a better IFRS expert. The cost is proportional only to the IFRS expert's size — 200M parameters, trained exclusively on accounting literature. The leverage is different by orders of magnitude. And when a user asks a question that spans IFRS accounting, Indian company law, and international transfer pricing simultaneously, Clark activates three experts in parallel. A dense model activates no experts — it activates everything and hopes the correlation patterns in its training data surface something coherent.
The circuit produces no light by itself. Every lightbulb illuminates exactly what it was trained to illuminate. Together they light up a room that no single bulb, however powerful, could illuminate alone.
Clark launches in India because India provides the optimal calibration environment: 1.4 billion potential users, 17 million developers, cost-sensitive economics that demand genuine architectural efficiency rather than compute brute-force, and 22 scheduled languages that require multi-lingual expert coverage from day one. India is where the circuit gets wired correctly under real conditions.
But the architecture is language-agnostic and jurisdiction-agnostic by fundamental design. Training a Tamil legal expert or an English common law expert is the same protocol applied to different corpora — both are 100M–500M parameter models registered to the same backbone. Training a German tax code expert or a Japanese medical terminology expert follows the same path. Global expansion is not a rebuild. It is a registration event. New language, new jurisdiction, new domain: train the expert, certify it, register it. The circuit already exists. The socket is already there. You are adding a bulb.
The global addressable market — ₹39.6 trillion annually across individual users, enterprise, and the API-developer ecosystem — is the total worldwide demand for reliable, structured, verifiable intelligence across every domain, language, and jurisdiction. Clark is building the infrastructure to serve that demand from a base in Chennai that the entire world connects to.
| Advantage 1: Infinite Scalability Without Retraining | Adding a new domain requires training one new expert model and registering it. The backbone is not retrained. The other 9,999 experts are not disrupted. A traditional dense model company adding a new domain must retrain a billion-parameter system at enormous cost. Clark adds a lightbulb. This is the fundamental infrastructure advantage that no competitor can match without switching architectures. |
| Advantage 2: Federated Contribution at Zero Marginal Cost | The expert marketplace allows third parties — universities, enterprises, research institutions — to contribute specialist models. Clark provides the circuit. Contributors provide the bulbs. Revenue share: 70% to the contributor, 30% to Clark. The knowledge base expands at near-zero marginal cost to Clark. This is the marketplace network effect that transforms Clark from a product into a platform. |
| Advantage 3: Genuine Expert-Level Depth Per Domain | A 200M-parameter model trained exclusively on IFRS accounting standards knows IFRS accounting better than a 200B general-purpose model trained on everything. Specialisation is a capability multiplier. Clark's experts are genuinely expert within their defined scope — not approximately expert, not statistically likely to be expert, but certifiably, traceably, verifiably expert. |
The prevailing consensus assumes that artificial intelligence will advance primarily through scale — bigger models, more parameters, more training data. This assumption is deeply embedded in how every major AI lab allocates capital and sets research priorities. It is also the source of a fundamental exploitable error.
Scaling a dense model cannot produce the same depth per domain as a specialised model trained exclusively on that domain. A 70B dense model must distribute its 70 billion parameters across every domain of human knowledge simultaneously. Clark's 200M IFRS expert concentrates all its parameters on one domain. The depth comparison is not even close. And when the user's question spans five domains, Clark activates five specialists simultaneously. The dense model guesses from blurred memory. Clark illuminates from focused expertise.
The entire Clark model rests on one testable assumption: that a 70B backbone trained for orchestration, routing queries to 100M–500M parameter specialist experts, produces outputs that are more reliable, more verifiable, and more economically efficient than a monolithic model of comparable or greater total parameter count. This is not assumed to be true — it is designed to be empirically tested at Month 4.
500 multi-step reasoning tasks across 10 domains: mathematics, legal analysis, financial modelling, medical literature, code architecture, regulatory compliance, scientific analysis, business strategy, engineering design, historical causality. Evaluated on three axes: correctness, traceability (can each reasoning step be audited?), and reliability (same input → same output across 10 runs). Compared against GPT-5.4 and Gemini 2.5 Pro. Go/no-go decision explicitly structured around this benchmark before further capital deployment.
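The reliability axis (same input → same output across 10 runs) is mechanically checkable. A minimal harness might look like the following sketch; the function names and the stub model are hypothetical, and the real benchmark would also score correctness and traceability.

```python
from typing import Callable

def is_reliable(model: Callable[[str], str], task: str, runs: int = 10) -> bool:
    """A task passes the reliability axis only if all runs agree exactly."""
    outputs = {model(task) for _ in range(runs)}
    return len(outputs) == 1

def reliability_rate(model: Callable[[str], str], tasks: list[str]) -> float:
    """Fraction of the benchmark tasks with run-to-run agreement."""
    return sum(is_reliable(model, t) for t in tasks) / len(tasks)

# Stub model standing in for the system under test.
deterministic = lambda t: f"answer({t})"
score = reliability_rate(deterministic, [f"task-{i}" for i in range(500)])
print(score)  # 1.0
```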
| CEO (Maurya) — Post-Seed | 22.8% · 4-year vest · 1-year cliff · Value at ₹600 Cr post-money: ₹13.68 Crore |
| CFO (Krishnaswamy) — Post-Seed | 22.8% · 4-year vest · 1-year cliff · Value at ₹600 Cr post-money: ₹13.68 Crore |
| CPO — Post-Seed | 22.8% · 4-year vest · 1-year cliff · Value at ₹600 Cr post-money: ₹13.68 Crore |
| ESOP Pool | 7.6% post-seed · 1,000,000 shares · Expanding to 12% at Series A |
| Seed Investors | 24.0% · 3,157,894 new preferred shares · ₹144 Cr at ₹600 Cr post-money |
| Three Non-Negotiable Hire Values | 1. Intellectual honesty — truth before ego. 2. Ownership mentality — your problem until solved. 3. Bias toward depth — surface solutions are not solutions. |
| Founder Salaries | ₹5.5 lakh/month each · well below market for their backgrounds · commitment signal |
Across students, developers, analysts, founders, and enterprise decision-makers, complaints about current AI systems converge with striking consistency. They are not complaining about speed, access, or intelligence. They are complaining about structure and reliability — which are architectural properties, not parametric ones. Verbatim: 'It gives me an answer but I don't trust it.' 'I still have to check everything manually.' 'It breaks on real problems.' 'I spend more time fixing its output than doing the work myself.' 'It sounds completely confident and is completely wrong.' 'I can't use it for anything that actually matters.' 'I need five tools to finish one task and none of them talk to each other.'
The pattern is unambiguous. Users are not asking for a more capable model. They are asking for a more reliable system. You cannot make a monolithic model more reliable by making it larger — you can only make it more capable on average. Clark's circuit-and-lightbulb architecture is designed specifically for reliability, because reliability requires decomposition, specialisation, routing, and verification. None of these are achievable in a single forward pass through a dense model.
| Failure 1: Single-Pass Generation | Dense models generate answers in one forward pass. Complex problems require multi-step reasoning where each step depends on the verified output of the previous. Forcing this into a single pass produces plausible-looking text that is structurally unreliable. Clark's solution: the backbone decomposes and routes. Each expert executes its sub-task with genuine depth. The verification layer checks each step before synthesis. |
| Failure 2: No Domain Specialisation | A 70B dense model trained on everything is not genuinely expert in anything. When asked an advanced question about protein folding kinetics or SEBI derivative regulations, it produces a coherent-sounding approximation drawn from statistical correlations. Clark's solution: a 300M protein folding expert trained exclusively on biophysics literature, whose answer can be traced to specific citations and verified against experimental results. |
| Failure 3: No Orchestration Layer | Current systems have no component responsible for decomposing problems, routing sub-tasks, and synthesising outputs. The user becomes the orchestrator. Clark's solution: the 70B backbone is trained explicitly for orchestration. Routing is its primary and only function. |
| Failure 4: No Verification Before Delivery | Current systems deliver outputs without internally checking them. They have no mechanism for asking 'is this actually correct?' Clark's solution: every expert output passes through the verification layer before synthesis. Contradictions flagged. Claims without grounding marked uncertain. Only verified outputs proceed. |
| Failure 5: Misaligned Incentives | AI systems are optimised for engagement — session duration, response speed, fluency. Fast, confident, plausible responses score well regardless of factual accuracy. Clark's architecture decouples every design decision from engagement optimisation. Every metric is evaluated against correctness and traceability. |
| India Direct Cost (Annual) | ₹50,000+ Crore — verification time, error correction, bad decisions across students and professionals who cannot trust their AI tools |
| Global Direct Cost (Annual) | ₹60–100 Trillion — 400 million knowledge workers × average ₹1.5 lakh annual productivity loss from unreliable AI outputs |
| Second-Order Costs | Business decisions made on incorrect AI analysis · legal penalties from compliance errors · medical errors from AI-assisted misdiagnosis · failed product launches |
| Root Cause | Systems optimised for engagement at the expense of correctness. The incentive structure of the AI industry rewards plausibility over accuracy. |
| GPU Compute Cost Collapse · CoreWeave ↗ | H100 SXM5 at $4.76/hr. Expert model inference costs fell 5–10× in 36 months. Routing 10,000 experts is now economically viable at enterprise pricing. Three years ago it was not. |
| Foundation Model Quality Threshold | The 70B backbone requires general language understanding at a threshold quality before expert routing is reliable. That threshold was crossed in 2023. Below it the backbone cannot reliably decompose problems. Above it the architecture becomes tractable. |
| Orchestration Framework Maturity | Multi-model, multi-step orchestration is production-engineering in 2026. In 2022 it required months of bespoke development. The tooling crossed a threshold that makes Clark buildable by a team of 10. |
| Expert Model Training Cost Collapse | Training a 300M parameter expert model on a specific domain corpus costs ₹2–15 lakh per model in 2026 compute economics. Building 10,000 bulbs is tractable only at current compute prices. |
| Data Sovereignty Regulations · DPDP Act ↗ · EU AI Act ↗ | Both require traceable, auditable outputs and data localisation. Clark's architecture satisfies these requirements structurally. Centralised monolithic models increasingly do not. Regulation is Clark's competitive advantage. |

| Regulation | Jurisdiction | Clark Alignment | Source |
|---|---|---|---|
| IndiaAI Mission · ₹10,372 Cr | India | 10,000+ GPU compute · Innovation Centre empanelment target Month 12 | PIB Official ↗ |
| India DPDP Act 2023 | India | Federated architecture = data localisation by design. Indian user data never leaves India. | MeitY PDF ↗ |
| EU AI Act 2024/1689 | European Union | Traceable expert outputs with reasoning chains satisfy Article 13 transparency requirements architecturally. | EUR-Lex ↗ |
| DPIIT Startup India | India | Section 80IAC tax exemption · Angel Tax exemption · Self-certification for compliance. | Startup India ↗ |
| IITM Research Park · respark.iitm.ac.in ↗ | Chennai | India's first university research park · 255+ incubated companies · AI4Bharat proximity · PhD intern pipeline. | Physical presence |
Clark's total addressable market cannot be read from any existing market research report because no existing report models the convergence of individual intelligence tools, enterprise reasoning platforms, and API infrastructure into a single architecture. Clark is building the infrastructure layer beneath all three simultaneously.
| Layer | Population | Monetisation | Annual TAM ₹ | Structural Driver |
|---|---|---|---|---|
| Individual Users — Global | 800M knowledge workers and students worldwide | ₹700/month ARPU · 15% paid = 120M users | ₹10 Trillion | Reliability drives conversion — users who trust outputs pay; those who don't, won't |
| Enterprise — Global | 40M companies · 25% 5-year adoption | SMB ₹2L/yr · Mid ₹20L/yr · Enterprise ₹3Cr/yr | ₹19.6 Trillion | Expert marketplace delivers domain depth no general LLM can match |
| API & Developer Ecosystem | 50M developers · 20M active builders | ₹2L/year avg via API + marketplace | ₹10 Trillion | Applications built on Clark generate ongoing API revenue without direct sales |
| TOTAL — Global | | | ₹39.6 Trillion / year | |
| Phase 1: India (Launch) | Where the circuit gets wired. 250M students, 60M knowledge workers, 22 languages, cost sensitivity that enforces genuine efficiency. Clark achieves PMF and architectural scale before facing full global competitive pressure. |
| Phase 2: United States | Highest enterprise willingness to pay. Deepest API developer ecosystem. English-language expert coverage deepest from training phase. Entered second with proven product. |
| Phase 3: Europe | EU AI Act compliance demands traceable, auditable outputs — Clark's architecture satisfies this structurally. Data sovereignty laws favour federated design. German, French, Spanish experts registered before entry. |
| Phase 4: Southeast Asia, Middle East, Latin America | Each new market = train jurisdiction experts + register. The circuit doesn't change. You add bulbs. |
When Clark opens its expert registration protocol to third parties — universities, enterprises, research institutions, practitioners — the knowledge base grows without Clark paying for that growth. Consider: IIT Bombay's materials science department trains a 200M metallurgy expert and registers it to Clark's backbone. IIT Bombay earns 70% of every API call that routes to their expert. Clark earns 30% with zero development effort. The backbone gets smarter. The marketplace deepens. Users gain access to genuine materials science expertise no monolithic model can match. This is market creation.
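The 70/30 split is simple to work through. In the sketch below the per-call price and monthly call volume are invented purely for illustration; only the 70/30 ratio comes from the text above.

```python
# Worked example of the marketplace revenue share (illustrative numbers).
price_per_call = 2          # ₹ per routed API call — assumed
calls_per_month = 1_000_000 # assumed monthly volume to one expert

revenue = price_per_call * calls_per_month
contributor_share = revenue * 70 // 100   # e.g. IIT Bombay's 70%
clark_share = revenue - contributor_share # Clark's 30%, zero development cost

print(contributor_share, clark_share)  # 1400000 600000
```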
| Scenario | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|---|---|---|---|---|---|
| Bear | ₹8–12 Cr | ₹60–80 Cr | ₹300–400 Cr | ₹700–900 Cr | ₹1,000–1,500 Cr |
| Base | ₹15–20 Cr | ₹150–200 Cr | ₹800–1,000 Cr | ₹2,000–2,500 Cr | ₹3,000–5,000 Cr |
| Bull | ₹25–35 Cr | ₹250–350 Cr | ₹1,200–1,500 Cr | ₹4,000–5,000 Cr | ₹7,000–10,000 Cr |
Clark's competitive situation is architecturally unusual. The difference between Clark and its competitors is not a matter of degree — it is a matter of kind. OpenAI, Google, Anthropic, and Mistral are all building increasingly capable versions of the same architecture: a single dense transformer model, pre-trained on broad data, fine-tuned for specific behaviours. Clark is building a fundamentally different architecture: a routing backbone plus a registry of detached specialist experts. These two approaches are not on the same evolutionary path. You cannot evolve a dense model into Clark's architecture without rebuilding from scratch.
Strength: brand dominance, developer ecosystem, fastest model iteration. GPT-5.4 pricing: $2.50/1M input tokens — verified at OpenAI Pricing ↗. Structural blindspot: OpenAI is architecturally committed to scaling dense models. Every research team, every infrastructure investment, every product decision assumes that making one model larger and better-trained is the path to better AI. Pivoting to detached MoE would require dismantling the research programme that defines their identity. It is not a pivot they can make without becoming a different company.
Strength: distribution across search, productivity tools, cloud, and advertising. Deepest financial resources. Gemini 2.5 Pro pricing: $1.25–$2.50/1M — verified at Gemini Pricing ↗. Structural blindspot: Google's business model depends on search query volume. A system that replaces search with direct expert-level answers reduces advertising inventory. Google must develop AI within the constraint of not fully cannibalising search revenue. This is a permanent structural conflict, not a temporary tension. Clark has no such conflict.
| Krutrim AI · olakrutrim.com ↗ | India's first AI unicorn (Jan 2024). Krutrim V2: 12B dense model, 22+ Indic languages. API: ₹7–17/M tokens. A 12B dense model vs Clark's 70B backbone + 10,000 experts is a different architecture class entirely. Comparing them is like comparing a single floodlight to a building with 10,000 specialist bulbs. |
| Sarvam AI · sarvam.ai ↗ | Selected for IndiaAI Mission Innovation Centre. Sarvam 105B currently free. Primary focus: speech, voice, translation. Genuinely complementary to Clark — Sarvam handles voice/language communication, Clark handles reasoning and intelligence. Strong potential partnership and expert model contributor. |
| AI4Bharat · ai4bharat.iitm.ac.in ↗ | Open-source NLP at IIT Madras. IndicTrans2, IndicWhisper, all open-source. No commercial managed API — academic research only. Natural candidate for expert model contributions to Clark's registry. Potential strategic partner, not a competitive threat. |
| Scenario 1: Price Undercutting | Respond with cost-per-correct-outcome analysis. A 40% price reduction is irrelevant if Clark's expert routing eliminates 60% of verification overhead. The value axis is different. |
| Scenario 2: Incumbent Announces MoE Product | Examine carefully. Internal MoE (routing between parameter blocks within one model) is fundamentally different from Clark's detached MoE — independently trained, independently certifiable domain experts. Communicate the distinction clearly. |
| Scenario 3: Open-Source Expert Models Proliferate | Clark's value is not in any individual expert model — it is in the backbone's routing quality and the certified registry. Open-source models can contribute to Clark's registry through the federated protocol. Open-source is a supply chain, not a threat. |
| Scenario 4: Key Investor Withdraws Mid-Round | Bridge protocol: founders inject personal capital for 60 days. CFO activates backup investor pipeline maintained with minimum 3 warm alternatives at all times. Round closes with replacement lead. |
| Scenario 5: Regulatory Action Against AI | Clark's architecture is the regulatory solution. Traceable expert outputs satisfy every transparency requirement articulated by AI regulators globally. Regulatory tightening is Clark's competitive advantage. |
| Scenario 6: Talent Poaching | 4-year vesting, competitive equity, IITM proximity for research talent, and mission-level work. The people who build Clark's backbone understand what they are building — that understanding is not easily transferred. |
| Scenario 7: Expert Marketplace Disintermediation | A competing marketplace without Clark's routing quality is a directory of bulbs without a functioning circuit. Backbone routing quality requires years of deployment data — it cannot be purchased or replicated quickly. |
| Scenario 8: Big Tech Acquisition of Indian AI Competitor | Deepen India-sovereign positioning and government customer relationships. Big tech regulatory exposure in India creates structural advantage for India-first infrastructure. |
Clark's primary user is defined by mindset rather than demographics: precision-seeking, efficiency-obsessed, with zero tolerance for confident-sounding incorrect answers. This user has been burned by AI-generated outputs that looked right and were wrong. They have paid the cost — in time, in credibility, in bad decisions — of trusting a system that was not actually reliable. Three defining fears drive their behaviour: the fear of invisible mistakes in plausible-looking outputs; the fear of being outpaced by peers with more capable tools; and the fear of losing ownership of their own thinking process. Clark's architecture addresses all three: traceable expert outputs eliminate invisible mistakes; genuine expert-level depth provides competitive advantage; the system amplifies rather than replaces judgment by showing its reasoning chain.
Enterprise budget holders — CTOs, Heads of Operations, Chief Product Officers — are not buying AI capability. They are buying three things: operational leverage (more output with the same headcount), risk reduction (AI-assisted decisions that can be audited and defended), and strategic competitive advantage (the ability to deploy genuine domain expertise across the organisation without hiring 100 additional specialists). Clark's expert marketplace addresses the third point in a way no competitor can. An enterprise can access 10,000 domain experts simultaneously at a fraction of the cost of maintaining even 10 in-house specialists.
| Trigger 1: High-Stakes AI Failure | A critical project where the AI produces an incorrect output at the moment reliability matters most. The cost becomes immediately concrete and personal. Converts passive dissatisfied user into active buyer. |
| Trigger 2: The Expert Gap Discovery | A user submits a genuinely advanced domain question and discovers the incumbent system produces a confident, plausible, and factually incorrect answer. The system is projecting expertise it does not have. |
| Trigger 3: Peer Workflow Comparison | Observing a colleague with materially better outputs from a more reliable system. Direct personal comparison. Creates both urgency and clear decision criteria. |
| Trigger 4: Regulatory Audit Trigger | Enterprise receives a regulatory inquiry about an AI-assisted decision and cannot produce a reasoning chain. The compliance gap becomes a procurement event. |
| Trigger 5: Scaling Pain | Team tries to extend AI to specialist domains — legal, medical, engineering — and discovers the current system cannot deliver expert-level depth. Active search for specialised infrastructure begins. |
| Segment | Monthly Revenue | Gross Margin | CAC | LTV | LTV:CAC | Key Retention Driver |
|---|---|---|---|---|---|---|
| Developer self-serve | ₹15,000 | 95% | ₹2,000 | ₹90,000 | 45× | API integration depth — workflows depending on Clark's experts raise switching cost |
| SME API mid-tier | ₹1,50,000 | 93% | ₹15,000 | ₹9,00,000 | 60× | Domain expert quality — SMEs cannot afford in-house experts; Clark provides 10,000 |
| Enterprise annual | ₹15,00,000 | 88% | ₹2,50,000 | ₹1,80,00,000 | 72× | Expert certification + workflow integration make migration structurally painful |
| Govt / PSU custom | ₹60,00,000 | 82% | ₹5,00,000 | ₹7,20,00,000 | 144× | Multi-year empanelment contracts with renewal assumption |
The current market charges 5–10× the actual cost of delivering expert-level intelligence. A chartered accountant charges ₹3,000–15,000/hour for IFRS compliance analysis. Clark's IFRS expert delivers comparable structured analysis at ₹150–500 per query. A patent attorney charges ₹8,000–25,000/hour for prior art analysis. Clark's patent law expert and chemistry expert, co-activated by the backbone for a pharmaceutical patent query, deliver traceable, citable analysis at ₹500–2,000 per query. The gap exists not because technology is expensive — Clark's inference cost for a 300M expert is fractions of a rupee — but because specialist knowledge was previously locked behind years of human specialisation.
| Inefficiency 1: Expert Knowledge Lock-in | Expert knowledge is locked inside individual human minds with limited scaling potential. Clark's expert models scale expert knowledge to unlimited simultaneous deployment at near-zero marginal cost. |
| Inefficiency 2: Multi-Tool Fragmentation | Knowledge workers use 4–8 specialised AI tools for different domains, reconstructing context manually between each. Clark's backbone handles cross-domain coordination automatically. |
| Inefficiency 3: The Reliability Tax | Users of current AI systems spend 30–60% of AI-assisted work time verifying, correcting, and fact-checking outputs. Clark's verification layer eliminates this tax structurally. |
| Inefficiency 4: Language and Jurisdiction Barriers | Global AI systems are predominantly English-language and US-jurisdiction-optimised. Clark's federated expert model allows native-language, native-jurisdiction experts for every market from launch. |
| Inefficiency 5: Insight Without Action Gap | Current systems stop at generating information. They do not decompose it into actionable steps, route to the right expert for each step, or synthesise into a decision-ready format. Clark closes this gap. |
The backbone does not answer questions. It is trained to understand the structure of questions — to recognise what kind of problem has arrived, what sub-problems it can be decomposed into, which specialist domains each sub-problem requires, and how the outputs of those specialists should be synthesised. Think of the backbone as a highly experienced project manager who has worked across every domain of human knowledge. They do not need to know how to perform brain surgery — they need to know enough about brain surgery to recognise when a brain surgeon is needed, to understand the surgeon's output, and to integrate it with the radiologist's and pharmacologist's outputs. The backbone's training data is problem-decomposition patterns, expert-output evaluation criteria, and multi-domain synthesis — not domain content itself.
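The decompose → route → synthesise loop described above can be sketched in a few lines. Everything here — the `SubTask` shape, the expert callables, the decomposition rule — is illustrative, not Clark's actual API:

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    domain: str
    question: str

# Hypothetical expert services, keyed by domain (stand-ins for 100M-500M models).
EXPERTS = {
    "patent_law": lambda q: f"[patent_law] {q}",
    "chemistry": lambda q: f"[chemistry] {q}",
}

def decompose(query: str) -> list[SubTask]:
    # Stand-in for the backbone's learned decomposition: a pharmaceutical
    # patent query splits into a legal and a chemical sub-problem.
    return [
        SubTask("patent_law", f"prior-art scope for: {query}"),
        SubTask("chemistry", f"compound novelty for: {query}"),
    ]

def orchestrate(query: str) -> str:
    outputs = []
    for task in decompose(query):
        expert = EXPERTS[task.domain]           # routing: pick the bulb
        outputs.append(expert(task.question))   # detached execution
    # Synthesis: merge expert outputs; tags give per-conclusion provenance.
    return "\n".join(outputs)

result = orchestrate("novel kinase inhibitor")
```

The point of the sketch is the division of labour: `decompose` and the final join belong to the backbone; each expert sees only its own sub-task.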
| Parameter Range | 100M to 500M per expert. Narrow, highly structured domains (specific regulatory frameworks, well-defined mathematical subfields) achieve deep competence at 100M–200M. Broader complex domains (general medicine, full-stack engineering) require 300M–500M. |
| Training Data | Each expert trained exclusively on domain-specific authoritative sources: textbooks, peer-reviewed literature, official regulatory documents, case law, technical standards. No general web crawl. No cross-domain contamination. |
| Certification Protocol | Before registration: domain-specific accuracy benchmarking, out-of-domain refusal tests (the expert must correctly decline questions outside its scope), and consistency tests (same input → same output across 10 runs). All three must pass before production registration. |
| Detachment Principle | Experts have no knowledge of each other. They receive a sub-task from the backbone, execute within their domain, and return their output. Cross-expert coherence is the backbone's responsibility exclusively. |
| Update Protocol | Expert models can be retrained, improved, and updated without affecting the backbone or any other expert. Updating an expert is like replacing a lightbulb — the circuit continues; only the illumination quality of that socket changes. |
| Contribution Protocol | Third-party contributors submit via developer portal. Automated certification → human expert review → staged deployment (alpha → beta → production). End-to-end: 3–4 weeks. Quality is non-negotiable — the marketplace's value depends entirely on the circuit routing to bulbs that actually illuminate correctly. |
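A minimal sketch of the certification battery's two automatable checks — consistency across repeated runs and out-of-domain refusal. The expert interface and the `"REFUSE"` sentinel are invented for illustration:

```python
def certify(expert, in_domain_q, out_of_domain_q, runs=10):
    """Toy certification battery: consistency and out-of-domain refusal."""
    outputs = {expert(in_domain_q) for _ in range(runs)}
    consistent = len(outputs) == 1                 # same input -> same output, 10 runs
    refuses = expert(out_of_domain_q) == "REFUSE"  # must decline out-of-scope queries
    return consistent and refuses

def gst_expert(q: str) -> str:
    # Hypothetical GST-compliance expert with explicit scope enforcement.
    if "GST" not in q:
        return "REFUSE"
    return f"GST analysis: {q}"

passed = certify(gst_expert, "GST rate for packaged software", "warfarin dosing")
```

A real battery would add the domain-accuracy benchmark; the structure — all checks must pass before production registration — stays the same.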
| Phase 1 (Mo. 0–6): Backbone Training | 70B backbone trained and validated. First 100 expert models trained internally across core domains. Internal API alpha Month 4. Month 4 benchmark vs GPT-5.4 and Gemini 2.5 Pro. |
| Phase 2 (Mo. 6–18): Expert Expansion & Beta | Expert registry grows from 100 to 1,000+ models. Public API beta launches Month 7. Expert registration portal opens Month 10. Revenue scaling. Backbone routing quality improves with deployment data. |
| Phase 3 (Mo. 18–36): Marketplace & Platform | Expert registry reaches 5,000+ models. Third-party contributors earning real revenue. US market entry. Platform transition: Clark is no longer a product — it is infrastructure other products are built on. |
| Phase 4 (Mo. 36–60): Global Infrastructure Layer | Expert registry approaches 10,000+ models across all major languages, jurisdictions, and domains. Clark is evaluated as a dependency, not a tool. IPO preparation. The circuit is wired into the global economy. |
| GPU Compute · CoreWeave ↗ | NVIDIA H100 SXM5 80GB · $4.76/hr · ₹150/hr at ₹84 FX rate · pure OpEx · Kubernetes-managed autoscaling · 256 GPUs Mo.0–5 · 72 GPUs Mo.6–23 |
| Storage | Hybrid: S3-compatible object store (training data) · PostgreSQL (structured metadata) · Qdrant vector DB (semantic retrieval) · Redis (session cache + rate limiting) · data residency controls for DPDP compliance |
| Languages & Frameworks | Python (ML/research) · Go (inference serving) · Rust (backbone routing engine — microsecond decisions require native performance) · PyTorch (BSD) · Hugging Face Transformers (Apache 2.0) · FastAPI (MIT) |
| MLOps | Weights & Biases experiment tracking · GitHub Actions CI/CD · automated benchmark tests on every checkpoint · deployment gated on accuracy threshold · drift detection for backbone routing quality and expert model performance |
| Expert Registry Infrastructure | Custom registry service managing 10,000+ expert model metadata: domain scope, benchmark scores, version history, contributor attribution, revenue share tracking · routing index optimised for sub-millisecond selection across 10,000+ candidates |
| Model | Scope | Training Start | Key Benchmark | GPU Budget |
|---|---|---|---|---|
| 1B Baseline | General language validation | Month 2 | Perplexity < 15 | ₹55,00,000 |
| 7B Backbone | Problem decomposition capability | Month 3 | MMLU > 60% · decomposition accuracy > 70% | ₹1,85,00,000 |
| 30B Backbone | Expert routing precision | Month 5 | MMLU > 68% · routing accuracy > 82% | ₹4,20,00,000 |
| 70B Backbone (Final) | Full orchestration capability | Month 6 | MMLU > 74% · routing accuracy > 91% | ₹6,80,00,000 |
| First 100 Expert Models | Core domains (law, finance, science, languages) | Month 1–6 | Domain accuracy > 85% per cert. battery | ₹3,50,00,000 |
| Primary Reference · MeitY PDF ↗ | Digital Personal Data Protection Act 2023. Data fiduciary obligations, consent management, data localisation, Data Protection Board reporting. |
| Data Localisation | Indian user data processed and stored within India. Indian-jurisdiction experts run on Indian infrastructure. Structurally compliant — not procedurally retrofitted. |
| Audit Trails | All data access events logged with purpose, actor, and timestamp. Right to erasure: cascading deletion across all storage layers within 72 hours. Implemented as first-class API endpoints. |
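A toy sketch of what the core logic behind a first-class erasure endpoint might look like — storage layers modelled as in-memory dicts, every name hypothetical:

```python
from datetime import datetime, timezone

# In-memory stand-ins for the four storage layers named in the stack.
STORES = {
    "object_store": {"u1": b"raw-docs"},
    "postgres":     {"u1": {"name": "A. User"}},
    "qdrant":       {"u1": [0.12, 0.88]},
    "redis":        {"u1": "session-token"},
}
AUDIT_LOG = []

def erase_user(user_id: str) -> dict:
    """Cascading right-to-erasure across every layer, logged with purpose, actor, timestamp."""
    deleted = {layer: store.pop(user_id, None) is not None
               for layer, store in STORES.items()}
    AUDIT_LOG.append({
        "actor": "erasure-service",
        "purpose": "right_to_erasure",
        "subject": user_id,
        "ts": datetime.now(timezone.utc).isoformat(),
    })
    return deleted

outcome = erase_user("u1")
```

The property worth noting: deletion and its audit entry happen in one operation, so the 72-hour window is enforced by the code path rather than by procedure.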
Clark's routing infrastructure is stateless — backbone model weights are read-only after training. Every routing request can be handled by any available backbone inference node without session-specific state. Horizontal scaling = adding nodes with no architectural changes. Expert models are served through a similar stateless layer. Only a small subset (3–12 experts per query) activates per request, keeping per-query compute cost low regardless of total registry size.
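The stateless property can be illustrated with a toy fleet: because routing is a pure function of frozen weights and the request, every node agrees on the decision. The hash-based router below is a stand-in for the real mechanism:

```python
import hashlib

# Frozen routing table stands in for read-only backbone weights (assumption).
EXPERT_POOL = ["ifrs_accounting", "patent_law", "organic_chem"]

class BackboneNode:
    def route(self, query: str) -> str:
        # Pure function of (frozen weights, request) - no session state,
        # so any node in the fleet returns the identical routing decision.
        digest = int(hashlib.sha256(query.encode()).hexdigest(), 16)
        return EXPERT_POOL[digest % len(EXPERT_POOL)]

nodes = [BackboneNode() for _ in range(3)]   # horizontal scaling: just add nodes
decisions = {node.route("ifrs lease accounting query") for node in nodes}
```

Any request can land on any node; the load balancer needs no session affinity, which is what makes "add nodes, nothing else changes" true.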
| Expert Index Architecture | A custom vector index maps problem descriptions to expert candidates with sub-millisecond lookup time. Scales to 100,000+ experts without performance degradation. |
| Expert Model Serving | Each expert is an isolated inference service with a defined API contract. Can be scaled independently based on demand. A popular IFRS expert during quarterly reporting scales up without affecting any other expert. |
| Expert Quality Monitoring | Continuous monitoring for accuracy drift, scope adherence, and consistency. Quality degradation triggers automatic routing downgrade and contributor notification. |
| New Expert Onboarding | Automated certification (72 hours) → human expert review (1–2 weeks) → staged deployment (alpha → beta → production). Total: 3–4 weeks end-to-end. |
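The expert index reduces to nearest-neighbour search over expert embeddings. A brute-force cosine top-k sketch with made-up three-dimensional vectors (the production index would use an approximate-nearest-neighbour structure over learned embeddings to stay sub-millisecond):

```python
import math

# Made-up 3-d embeddings; the real index covers 10,000+ experts.
EXPERT_VECS = {
    "ifrs_accounting": [0.9, 0.1, 0.0],
    "patent_law":      [0.1, 0.9, 0.1],
    "organic_chem":    [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, k=2):
    # Brute force here; an ANN index (e.g. HNSW) avoids the full scan.
    ranked = sorted(EXPERT_VECS, key=lambda e: cosine(query_vec, EXPERT_VECS[e]),
                    reverse=True)
    return ranked[:k]

picks = top_k([0.8, 0.3, 0.0])   # query leaning accounting, with some legal flavour
```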
| Scale | Active Users | Monthly Revenue | Infrastructure Cost | Gross Margin (variable COGS basis) |
|---|---|---|---|---|
| Month 7 (Beta) | 100 | ₹2.0 L | ₹1.81 Cr (GPU fixed cost dominates) | Negative |
| Month 12 | 10,000 | ₹80.0 L | ₹2.68 Cr | ~70% |
| Month 18 | 100,000 | ₹3.20 Cr | ₹2.75 Cr | ~86% |
| Month 24 | 500,000 | ₹6.00 Cr | ₹2.77 Cr | ~88% |
| Year 5 | 50,000,000 | ₹417 Cr | ₹140 Cr | ~91% |
| Spoofing | JWT auth · API key rotation · OAuth 2.0 SSO · device fingerprinting |
| Tampering | HMAC verification · cryptographic integrity on expert model weights · immutable audit logs |
| Repudiation | Non-repudiable audit trail with cryptographic timestamps for all data operations and expert activations |
| Information Disclosure | AES-256 at rest · TLS 1.3 in transit · column-level encryption for PII · zero-trust network segmentation between expert models |
| Denial of Service | Per-key rate limiting · circuit breakers · queue buffering · auto-scaling · DDoS protection at network edge |
| Elevation of Privilege | RBAC with minimum permissions · zero-trust architecture · quarterly access review · privileged access workstations for admin |
| Milestone | Target Month | Outcome |
|---|---|---|
| SOC 2 Type I Initiated | Month 9 | Audit firm engaged · gap analysis complete · controls documented |
| SOC 2 Type I Received | Month 12 | Certificate issued · used in enterprise sales process |
| SOC 2 Type II Initiated | Month 12 | 6-month operational evidence period begins |
| SOC 2 Type II Received | Month 18 | Enterprise procurement requirement satisfied |
| ISO 27001 Certification | Month 24 | Global enterprise procurement requirement satisfied |
Clark's pricing is anchored to the economic value of expert-level, verifiable output — not tokens consumed. If Clark's IFRS expert replaces ₹15,000/hour chartered accountant time for structured compliance analysis, pricing is anchored at ₹1,000–5,000 per complex query — a fraction of the displaced value, while maintaining 85%+ gross margin. This reframes the conversation entirely: users are not comparing Clark's price to OpenAI's price per million tokens. They are comparing Clark's price to the professional services cost it replaces. Clark wins that conversation decisively.
| Free Tier | Limited backbone queries · access to 50 general-domain experts · no certified premium specialists. Conversion trigger: user encounters a task requiring specialist depth beyond the free tier ceiling. |
| Growth Tier (₹300–800/month) | Full expert registry · 50,000 backbone-routed tokens/month · Target: India's 250M students and 60M knowledge workers needing genuine specialist knowledge. |
| Pro Tier (₹2,000–5,000/month) | Unlimited expert registry · priority routing · persistent context memory · API access · custom expert request queue. |
| Enterprise Tier (₹8,000–40,000/month) | Custom expert model development · dedicated backbone allocation · SLA guarantees · audit trail exports · dedicated CSM · custom integration. |
| Subscription Revenue | Recurring MRR from Growth, Pro, and Enterprise tiers. Predictable foundation. |
| API Usage Revenue | Per-token and per-expert-activation pricing for high-volume programmatic access. Scales without proportional cost increase. |
| Expert Marketplace Commission | 30% of revenue from every third-party expert model activation. Scales without Clark's development investment. Expected to be the largest revenue stream by Year 5. |
| Custom Expert Development | Enterprise pays Clark to train domain experts on proprietary data. ₹5L–50L per project. Creates structural dependency — the custom expert can only be deployed through Clark's circuit. |
| Professional Services | Integration consulting, enterprise deployment, expert registry design. ₹5L–50L per engagement. |
| Expert Certification Services | Domain experts and enterprises pay for certification and quality review that enables production registry listing. |
| GPU Inference Cost (Training Mo.0–5) | ₹2,76,48,000/month · largest single cost · ends Month 5 · largest fixed expense in company history |
| GPU Inference Cost (Production Mo.6+) | ₹77,76,000/month · 72% reduction from training phase · fixed while revenue scales |
| COGS at Scale | Compute 60% · Storage & networking 20% · Operational overhead 20% · Blended COGS ≈ ₹20–30 per ₹100 revenue |
| Gross Margin Trajectory | Month 7: negative · Month 12: ~70% · Month 18: ~86% · Month 24+: ~88% sustained |
| Expert Activation Cost | ₹0.05–0.25 per expert activation · at 5 experts per average query: ₹0.25–1.25/query · well below query pricing in all tiers |
| LTV:CAC | 8× or higher · Enterprise: 72–144× · Primary LTV lever: expert marketplace creates expansion revenue without proportional CAC |
| Gross Margin | 75–95% depending on tier · developer self-serve at 95% · government custom at 82% |
| Burn Multiple Target | < 1.5× during growth phases · currently higher due to upfront training investment · falls sharply Month 6 |
| NRR Target | 130%+ in Year 3 · primary expansion driver: expert registry growth means users naturally access more experts over time |
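The segment economics above reduce to simple ratios; a quick arithmetic check, with values transcribed from the tables:

```python
# LTV and CAC per segment, transcribed from the segment table (Rs).
segments = {
    "developer_self_serve": {"ltv": 90_000,     "cac": 2_000},
    "sme_api_mid_tier":     {"ltv": 900_000,    "cac": 15_000},
    "enterprise_annual":    {"ltv": 18_000_000, "cac": 250_000},
    "govt_psu_custom":      {"ltv": 72_000_000, "cac": 500_000},
}
ltv_cac = {name: s["ltv"] // s["cac"] for name, s in segments.items()}

# Per-query expert cost: 5 activations averaging Rs 0.05-0.25 each.
query_cost_range = (5 * 0.05, 5 * 0.25)   # (0.25, 1.25)
```

The computed ratios (45×, 60×, 72×, 144×) match the table, and the per-query activation cost stays one to two orders of magnitude below query pricing in every tier.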
| Shareholder | Shares | % Post-Seed | Value @ ₹600 Cr | Notes |
|---|---|---|---|---|
| Founder 1 (CEO — Maurya) | 30,00,000 | 22.8% | ₹136.8 Cr | 4-yr vest · 1-yr cliff |
| Founder 2 (CFO — Krishnaswamy) | 30,00,000 | 22.8% | ₹136.8 Cr | 4-yr vest · 1-yr cliff |
| ESOP Pool | 10,00,000 | 7.6% | ₹45.6 Cr | Expanding to 12% at Series A |
| Seed Investors | 31,57,894 | 24.0% | ₹144 Cr | 1× non-participating preference |
| DCF Method | Base case cash flows discounted at 35% WACC. Implied value range: ₹400–800 Crore. Primary sensitivity: expert marketplace revenue materialisation timeline. |
| Comparable Analysis | Sarvam AI Series A at ~₹432 Cr post-money · Krutrim at $1B+ · US AI infrastructure companies at 20–40× ARR. Clark's architectural differentiation supports premium to dense-model-only comparables. |
| VC Method | Expected exit Year 5–7: ₹20,000–60,000 Crore at 10–15× ARR on base-case revenue. Required 10× return on seed implies ₹400–800 Cr current valuation. ₹600 Cr post-money is within range. |
| Reconciled | ₹600 Crore post-money is defensible across all three methods under base-case assumptions. Expert marketplace network effects are the primary upside optionality. |
Clark's most powerful distribution mechanism requires no paid marketing spend: the expert contributor programme. When a domain expert trains and registers an expert model, they receive 70% of every API call that routes to their model. This creates an immediate economic incentive to promote Clark's platform within their professional network. A tax attorney who registers a GST compliance expert and earns ₹50,000/month in passive API revenue becomes an advocate for Clark in every professional conversation. More expert contributors → more professional network exposure → more users discovered → more API revenue → more expert contributors attracted. This flywheel is self-reinforcing and cannot be purchased by a competitor — it must be earned through having the circuit first.
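The contributor payout reduces to volume × price × 70%. The per-activation price and monthly volume below are hypothetical, chosen to land near the ₹50,000/month figure in the example:

```python
# 70/30 split per the marketplace model; price and volume are illustrative.
contributor_share_pct = 70
price_per_activation = 100   # hypothetical Rs per routed call to this expert
activations = 715            # hypothetical routed calls per month

payout = activations * price_per_activation * contributor_share_pct // 100  # Rs 50,050
clark_commission = activations * price_per_activation - payout              # Rs 21,450
```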
| 1. AI4Bharat · ai4bharat.iitm.ac.in ↗ | Open-source Indic language models as expert contributions. Joint research on Indic expert training. IITM proximity makes this a relationship, not a formal negotiation. |
| 2. ICAI (Institute of Chartered Accountants) | 400,000+ CAs. Expert model certification partnership. Clark's accounting expert registry developed and validated with ICAI technical input. Distribution through ICAI continuing education channels. |
| 3. Bar Council of India | Indian law expert registry certified with Bar Council input. Distribution through state bar associations. Clark becomes the standard AI research tool for the Indian legal profession. |
| 4. Zoho / Freshworks | Clark's backbone embedded into Zoho CRM and Freshworks workflows. Distribution to Zoho's 100M+ users without direct sales effort. |
| 5. Jio Platforms | Bundling Clark's Growth tier with JioFiber and JioAirFiber premium plans. 200M+ potential users at near-zero CAC. |
| Motion 1: Self-Serve PLG | Zero human involvement. Product drives acquisition, activation, and conversion. Users reach first value moment (first expert-routed query) within 5 minutes, convert to paid when hitting the free tier ceiling on a high-stakes task. |
| Motion 2: Inside Sales (SME + Mid-Market) | Account executives manage inbound leads. Average deal: ₹1.5–20 lakh annually. Sales cycle: 2–6 weeks. Qualification: does the prospect have domain-specific needs requiring expert routing depth? |
| Motion 3: Field Sales + Expert Marketplace (Enterprise + Govt) | Dedicated account teams with solution architects. Enterprise deal: ₹20L–3Cr. Government: ₹50L–10Cr. Sales cycle: 3–9 months. Custom expert model development often included as deal accelerator. |
| Data Network Effect | Each query improves backbone routing precision. More queries → better routing → higher quality → more queries. Private to Clark — requires running the production system to generate. |
| Expert Marketplace Network Effect | Each new expert makes the platform more valuable to all users. A platform with 10,000 experts is far more than 10× as valuable as one with 1,000, because the number of possible cross-domain expert combinations grows combinatorially with registry size. |
| Social Network Effect | Each expert contributor brings their professional network. Each satisfied enterprise customer becomes a reference. Word-of-mouth within professional communities propagates with high conversion rates because domain-specific quality claims are easily verifiable by peers. |
There is a philosophical resonance between Clark's product architecture and its hiring philosophy: Clark builds a system where specialist experts are orchestrated by a generalising backbone. Clark hires specialists orchestrated by a leadership backbone. The hiring principle is the same: hire people who are genuinely expert in a narrow domain rather than generally capable across many. A candidate who 'knows a bit about' ML infrastructure, security, and frontend is less valuable than a candidate who knows ML infrastructure at a depth where they have made original contributions.
| Month | New Hires | Total | Key Roles Added | Monthly Payroll |
|---|---|---|---|---|
| Mo.0 | 3 | 3 | CEO · CTO · CPO | ₹16,50,000 |
| Mo.1 | 4 | 7 | Lead ML Eng ×2 · Research Scientist ×2 | ₹21,30,000 |
| Mo.2 | 9 | 16 | ML Eng ×3 · Data Eng ×2 · Platform Eng · DevOps · Security · Legal | ₹26,10,000 |
| Mo.3 | 11 | 27 | Research Eng ×2 · Finance · HR · QA · PM ×2 | ₹37,90,000 |
| Mo.4 | 10 | 37 | Research Intern ×2 · Frontend · ML ×3 · Data ×2 | ₹47,50,000 |
| Mo.5 | 8 | 45 | DB Eng · Network · Data Sci ×2 · CSM · Marketing · BizDev | ₹55,10,000 |
| Mo.6 | 6 | 51 | UX · Enterprise AE ×2 · Customer Success · Infra Eng | ₹64,30,000 |
| Mo.7 | 6 | 57 | Tech Writer · AE · Content · CS Manager · DevRel | ₹70,50,000 |
| Mo.10 | 8 | 80 | Marketing ×2 · Research Eng ×2 · Analytics · BizDev ×2 | ₹98,90,000 |
| Mo.12 | 3 | 90 | QA · Treasury Analyst · Procurement Manager | ₹1,33,20,000 |
| Mo.17 | 10 | 100 | Research Scientists · ML Engineers · Sales · Operations | ₹1,69,70,000 |
| KPI | Month 12 Target | Month 24 Target | Why It Matters |
|---|---|---|---|
| Monthly Recurring Revenue | ₹80 L | ₹6 Cr | Primary revenue health |
| Expert Models Registered | 500+ | 5,000+ | Marketplace growth and depth |
| Expert Routing Accuracy | > 88% | > 93% | Core architecture performance |
| Backbone Routing Latency (P95) | < 800ms | < 500ms | User experience quality |
| Enterprise Logo Count | 18 | 100+ | B2B adoption velocity |
| Net Revenue Retention | > 110% | > 125% | Expansion revenue health |
| Expert Contributor Revenue Share Paid | ₹5L+ | ₹50L+ | Flywheel activation signal |
| Free-to-Paid Conversion Rate | 8%+ | 12%+ | Monetisation efficiency |
| Gross Margin | 70%+ | 88%+ | Unit economics health |
| Burn Multiple | < 3× | < 1.5× | Capital efficiency |
| Months of Runway Remaining | 18+ | 24+ (post-Series A) | Investor confidence |
| Employee NPS | > 50 | > 60 | Culture and retention |
| Moat 1: Routing Intelligence | The backbone's routing quality improves with every query. After 10M queries, Clark knows with precision which combination of experts resolves which category of problem. This routing intelligence requires years of deployment at scale to develop. No competitor can purchase it — it must be earned. |
| Moat 2: Expert Registry Depth | A registry of 10,000 certified expert models is not a database — it is a decade of curation work. Each model required domain corpus collection, training, certification testing, human expert review, staged deployment, and ongoing quality monitoring. This registry is Clark's deepest IP asset. |
| Moat 3: Workflow Integration | Enterprise customers who integrate Clark's expert routing into compliance workflows, research processes, and decision support systems accumulate switching costs that compound with integration depth. After 12 months of deep integration, migration is not inconvenient — it is operationally disruptive. |
| Moat 4: Federated Contributor Network | Contributors are invested in Clark's success. They have trained models, built professional reputations around their expertise in the registry, and earn ongoing revenue. They actively advocate for Clark. This social investment cannot be replicated by writing a cheque. |
| Moat 5: Patent Portfolio | Provisional patent filed Month 3 covering the hierarchically detached federated MoE architecture, backbone routing protocol, and expert certification system. 15 filings planned across India, US, EU, UK jurisdictions over 3 years. |
| Moat 6: Regulatory Compliance by Design | Clark's federated, traceable, auditable architecture satisfies DPDP, EU AI Act, and enterprise compliance requirements structurally. Retrofitting compliance onto architectures designed without it is expensive, slow, and awkward. Clark's compliance is architectural and therefore permanent. |
Clark's deepest moat is the combination of routing intelligence and expert registry depth. Both are earned through deployment. Both compound with time. Together they create a competitive position that widens every day the system is used.
| Round Size | ₹144 Crore (₹1,440,000,000) |
| Post-Money Valuation | ₹600 Crore |
| Investor Stake | 24% — 3,157,894 new preferred shares |
| Liquidation Preference | 1× non-participating — standard for India seed institutional rounds |
| Anti-Dilution | Broad-based weighted average — full ratchet not accepted |
| Board Composition | 3 seats: CEO + Lead Investor + Independent Director (AI domain expert) |
| Use of Funds | 47.75% deployed across 10 operational categories · 52.25% strategic buffer reserve |
| Series A Trigger | Month 18 · Conditions: 70B backbone deployed · ≥18 enterprise customers · ARR ≥ ₹50 Cr · registry ≥ 2,000 models · SOC 2 Type II received |
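The headline term-sheet figures are mutually consistent; a quick arithmetic check:

```python
# Term-sheet arithmetic (Rs Crore unless noted).
round_size = 144
stake_pct = 24                                 # investor stake, %
post_money = round_size * 100 / stake_pct      # 600
pre_money = post_money - round_size            # 456
share_price = 1_44_00_00_000 / 31_57_894       # ~Rs 456 per new preferred share
```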
| Blume Ventures · blume.vc ↗ | Deep tech focused · Chennai ecosystem presence · Portfolio: Atomicwork, E2E Networks, Neysa · Seed stage thesis alignment |
| Peak XV Partners · peakxv.com ↗ | Led Sarvam AI Series A · Deep AI thesis · High-value-add for enterprise go-to-market |
| Pi Ventures · pi.vc ↗ | Deep tech AI specialist · Active in India AI infrastructure · Highest technical thesis alignment |
| Accel India · accel.com ↗ | 34 deals in 2025 · Active AI portfolio · Strong US enterprise network for Phase 2 |
| Lightspeed India · lsvp.com ↗ | Co-led Sarvam AI Series A · Signals India AI infrastructure commitment |
| Endiya Partners · endiya.com ↗ | Deep tech specialist · SigTuple portfolio · Patient capital aligned with 24-month pre-revenue period |
| "Too early for this architecture" | The architecture is not early — it would have been impossible earlier. All six enabling constraints collapsed simultaneously in 2023–2025. This is not a vision — it is an implementation of what is now buildable. |
| "OpenAI will build this" | OpenAI is architecturally committed to scaling dense models. Pivoting to detached MoE requires dismantling the research programme that defines their identity and abandoning the infrastructure that represents their primary capital investment. |
| "Expert marketplace is unproven" | Three analogous marketplaces have succeeded: iOS App Store, AWS Marketplace, Hugging Face. Clark's marketplace has the additional advantage that experts are complementary, not competitive — more bulbs make the circuit more valuable for every user. |
| "₹600 Cr post-money is too high" | Comparables: Sarvam AI Series A at ~₹432 Cr (single dense model, limited languages). Krutrim at $1B+ (12B parameter model). Clark at ₹600 Cr with 70B backbone in training and a fundamentally different and larger architectural thesis. |
| "Show me traction first" | The backbone is in training. First paying customer: Month 7. The ask is for capital to complete the training run and deploy — not to validate a hypothesis, but to deploy an architecture that is already proven buildable. |
| Slide 1: The Problem | The AI industry optimises for engagement, not correctness. $60 trillion in global productivity loss annually from unreliable AI outputs. |
| Slide 2: The Architecture | The circuit and the lightbulbs. 70B backbone. 10,000 experts (100M–500M params each). 5+ trillion total parameter coverage. |
| Slide 3: Why Now | Six constraints collapsed simultaneously in 2023–2025. Buildable now. Window closes in 24 months. |
| Slide 4: The Market | ₹39.6T TAM across three global layers. India first. Expert marketplace creates the third layer no competitor has. |
| Slide 5: The Product | Live demonstration: IFRS accounting expert vs GPT-5.4. Same query. Traceable chain vs plausible approximation. |
| Slide 6: The Moat | Routing intelligence compounds with queries. Registry depth compounds with contributors. Switching costs compound with integration. |
| Slide 7: Go-to-Market | Expert contributor flywheel as primary distribution. Professional association partnerships. India → US → Europe. |
| Slide 8: Business Model | Six revenue streams. Expert marketplace as primary growth driver at scale. 70/30 revenue share drives contributor growth organically. |
| Slide 9: Financials | ₹144 Cr seed. 24-month runway with 52.25% buffer. Break-even Month 22–24. Year 5 base: ₹3,000–5,000 Cr ARR. |
| Slide 10: The Team | Maurya: built backbone systems on EuroHPC scale before raising money. Krishnaswamy: CA + MCom, economic spine. CPO: product architect. 100 FTEs by Month 17. |
| Accept Quickly | 1× non-participating liquidation preference · pro-rata rights · standard information rights · SAFE if it simplifies close |
| Negotiate Firmly | Board composition — no more than 1 investor seat on 3-person board · anti-dilution — broad-based weighted average only · ₹600 Cr post-money floor · ESOP expansion before investor dilution at Series A |
| Walk Away From | Full ratchet anti-dilution · super-majority approval rights for operational decisions · founder drag-along without investor majority |
| FOMO Engineering | Maintain 3+ warm backup investors throughout raise · communicate term sheet progress to all engaged investors simultaneously · firm close date · every meeting includes competitive tension statement |
| Timeline Target | 8–12 weeks from first institutional meeting to wire |
| Sean Ellis PMF Score Target | 70%+ of active users would be 'very disappointed' if Clark's expert routing disappeared. The specific framing: 'If Clark disappeared and you had to go back to GPT-5.4, how would you feel?' Disappointment rate is the primary signal. |
| Retention Curve Target | Week-8 retention above 60% for active users. Flattens above 40% = genuine retention. Above 60% = habit-forming product. |
| Activation Rate Target | 70%+ of new signups reach the first value moment (first expert-routed query producing an output demonstrably better than a general model) within their first session. |
| Time-to-First-Value Target | Under 5 minutes from account creation to first expert-routed query completing. |
| Organic Referral Rate Target | 30%+ of new users arriving through word-of-mouth or referral from existing users. Expert contributor network should be the largest single referral source. |
| Backbone Training Status | 256×H100 GPU cluster operational at IITM Research Park via CoreWeave. Training data pipeline processing 100B+ tokens. 1B baseline model trained and validated. 7B backbone training begins Month 3. |
| Expert Model Development | First 20 internal expert models under development concurrently with backbone training. Domains: Indian constitutional law, IFRS accounting, differential calculus, organic chemistry, Tamil NLP, Hindi NLP, GST compliance, clinical trial methodology, Python code architecture, structural engineering. |
| Strongest Proof Point | Backbone systems built and tested on EuroHPC-scale infrastructure before raising external capital. Technical credibility that precedes the fundraise is the proof point that matters most to sophisticated technical investors. |
| Biggest Open Question | Month 4 benchmark: does the 70B backbone's expert routing produce materially more reliable, traceable outputs than GPT-5.4 and Gemini 2.5 Pro on the 500-task standardised test battery? |
| Original Thesis (2022) | Build a more reliable LLM through better training data and RLHF. Standard approach. Standard outcome. |
| First Major Pivot | After 18 months: the reliability problem is architectural, not parametric. Making a dense model larger does not make it more reliable for expert-level tasks. |
| Second Major Refinement | The solution is not a smarter router on top of existing models. It is a dedicated backbone trained specifically for routing, with a detached registry of specialist experts. The circuit-and-lightbulb architecture. |
| Third Major Refinement | The backbone must be trained before the expert registry can be built. The circuit must be wired before the bulbs can be attached. This set the development sequence. |
Every major AI regulation enacted or proposed globally — India's DPDP Act, the EU AI Act, the US AI Executive Orders, and emerging ISO and NIST standards — converges on three requirements: traceability (outputs must be explainable and auditable), data sovereignty (personal data must be processed in compliant jurisdictions), and human oversight (high-stakes decisions must have audit trails enabling human review). Clark's architecture satisfies all three structurally, because these properties were designed as core architectural features rather than compliance afterthoughts.
The circuit-and-lightbulb architecture is uniquely suited to regulatory compliance. Every conclusion is tagged to the contributing expert model. Every expert's domain scope is certified and documented. Every reasoning chain is traceable step by step. When a regulator asks 'how did this AI system reach this conclusion?' Clark's answer is not a probabilistic explanation — it is a specific, ordered sequence of expert contributions with documented provenance. No dense monolithic model can provide this.
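The traceability claim above can be made concrete with a small sketch. This is an illustrative record shape only, not Clark's actual schema — the class names, fields, and expert identifiers are invented for the example.

```python
# Hypothetical attribution record: every conclusion carries an ordered,
# auditable sequence of expert contributions with provenance pointers.
from dataclasses import dataclass, field

@dataclass
class ExpertContribution:
    expert_id: str        # certified expert model identifier (invented here)
    domain: str           # certified domain scope
    confidence: float     # calibrated confidence for this contribution
    output_hash: str      # provenance pointer to the stored raw output

@dataclass
class AttributionTrace:
    query_id: str
    contributions: list[ExpertContribution] = field(default_factory=list)

    def audit_report(self) -> list[str]:
        """Ordered, human-readable chain for a regulator's review."""
        return [
            f"{i + 1}. {c.expert_id} ({c.domain}) conf={c.confidence:.2f}"
            for i, c in enumerate(self.contributions)
        ]

trace = AttributionTrace("q-001", [
    ExpertContribution("gst-compliance-v3", "GST compliance", 0.93, "sha256:ab"),
    ExpertContribution("ifrs-accounting-v1", "IFRS accounting", 0.88, "sha256:cd"),
])
print(trace.audit_report())
```

Because the chain is an ordered list of attributed contributions, the regulator's question is answered by replaying the list — not by post-hoc probing of a dense model.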
| Regulation | Jurisdiction | Key Requirement | Clark's Architecture Response | Source |
|---|---|---|---|---|
| India DPDP Act 2023 | India | Data localisation · consent management · data fiduciary obligations | Federated architecture: Indian user data processed on Indian infrastructure. Indian jurisdiction experts run in India. DPDP compliance is structural. | MeitY PDF ↗ |
| EU AI Act 2024/1689 | EU | Transparency · traceability · human oversight for high-risk AI | Expert attribution on all outputs. Every conclusion tagged to contributing expert model with confidence level. Satisfies Article 13 structurally. | EUR-Lex ↗ |
| DPIIT Startup India | India | Tax exemptions for recognised startups | Sections 80-IAC and 56(2)(viib) benefits applied at Month 0. Recognition certificate obtained before seed close. | Startup India ↗ |
| IndiaAI Mission | India | 10,000+ GPU compute · Innovation Centre empanelment | Clark targets Innovation Centre cohort 2 application at Month 6. | PIB Official ↗ |
| Data Classification at Ingestion | All incoming data classified: public (no restrictions), pseudonymous (anonymisation required), personal (consent-gated, DPDP rights apply), sensitive personal (additional protections). Classification enforced architecturally, not procedurally. |
| Expert Model Data Isolation | Each expert trained on domain-specific data only. An Indian constitutional law expert trained on legal corpus data has no access to any user's query history. Cross-domain contamination architecturally impossible. |
| User Data Rights Implementation | Right to access: API endpoint returning all stored user data in machine-readable format. Right to erasure: cascading deletion across all storage layers within 72 hours. Right to portability: JSON-LD export. All three implemented as first-class API endpoints. |
| DPDP Rules 2025 | Digital Personal Data Protection Rules notified November 13, 2025. Data Protection Board registration completed at incorporation. Data fiduciary obligations documented in Privacy Notice v1.0. |
| Prohibited Use Cases | Expert routing systems will not be built or certified for: autonomous weapons targeting, mass surveillance, discriminatory housing or lending, disinformation generation, content targeting at minors, or any use case where expert output directly determines a legal outcome without human review. |
| Bias Detection Programme | Every expert model evaluated on domain-specific fairness benchmarks before certification. Models demonstrating demographic disparity above 5% blocked from production registry. Quarterly re-evaluation of all production experts. |
| High-Risk Domain Policy | Expert models for medical diagnosis support, legal advice, and financial regulatory compliance must include explicit uncertainty quantification, and clear recommendations for professional human review where confidence falls below 85%. These are assistants to experts, not replacements. |
| Red Teaming Protocol | Monthly adversarial testing: attempts to extract training data from experts, route beyond certified scope, generate harmful outputs through multi-expert synthesis. Findings inform backbone routing guardrails and expert certification updates. |
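The ingestion-time data classification described above ('enforced architecturally, not procedurally') can be sketched as a gate that refuses records whose preconditions fail. The tier names follow the table; the enforcement logic is an assumed, minimal illustration.

```python
# Minimal sketch of architectural data-classification enforcement.
# Tier names from the compliance table; gating rules are illustrative.
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"                 # no restrictions
    PSEUDONYMOUS = "pseudonymous"     # anonymisation required
    PERSONAL = "personal"             # consent-gated, DPDP rights apply
    SENSITIVE = "sensitive_personal"  # additional protections

def admit(record_class: DataClass, consent: bool, anonymised: bool) -> bool:
    """A record enters the pipeline only if its tier's preconditions hold."""
    if record_class is DataClass.PUBLIC:
        return True
    if record_class is DataClass.PSEUDONYMOUS:
        return anonymised
    # PERSONAL and SENSITIVE both require consent; SENSITIVE would carry
    # further checks in a real system.
    return consent

print(admit(DataClass.PSEUDONYMOUS, consent=False, anonymised=True))  # True
print(admit(DataClass.PERSONAL, consent=False, anonymised=True))      # False
```

The point of the sketch: classification is a type-level property checked at the boundary, so no downstream component ever has to remember to apply the rule procedurally.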
India is not simply 'a large market.' It is the optimal calibration environment for Clark's architecture. Cost sensitivity forces genuine efficiency — you cannot hide behind compute brute-force when your customer base has a maximum individual willingness to pay of ₹500–2,000/month. This constraint forces Clark to build the most efficient possible routing architecture, and that efficiency becomes a structural competitive advantage when the platform expands to higher-paying markets where competitors are still paying for inefficiency.
India also provides the linguistic and jurisdictional diversity that makes the detached MoE architecture's breadth claim credible from day one. A system that can handle Tamil legal questions, Gujarati business analysis, Hindi technical writing, and English financial modelling simultaneously — with certified expert depth in each language — is demonstrably more capable than any English-optimised system. India is where the expert registry must prove its multi-lingual depth.
| India AI Market 2026 · NASSCOM ↗ | $15.7B in FY26 · Growing at 35% CAGR · $71B projected by 2030 · Government committed ₹10,372 Cr through IndiaAI Mission |
| Developer Ecosystem · GitHub Octoverse ↗ | 17M+ developers on GitHub · India overtook US in open-source contributor count in 2025 · 57.5M developers projected by 2030 |
| VC Funding · TechCrunch 2025 ↗ | $643M AI VC funding in India in 2025 · $11B total startup funding · Investors becoming more selective = Clark's institutional quality dossier is a competitive advantage |
| Priority Customer Segments | BFSI (₹8,000–40,000/month API): fraud detection, KYC, regulatory compliance. Legal (₹5,000–25,000/month): law firms, in-house legal, judiciary support. Education (₹300–2,000/month): 250M students, IIT/IIM research, coaching institutes. |
| First 100 Expert Models — India Priority | Indian constitutional law · Companies Act 2013 · GST compliance · SEBI regulations · RBI guidelines · NEET/JEE subject experts · IPC/CrPC · All 22 scheduled Indic language NLP experts · Indian medical protocols · CBSE/ICSE curriculum domains |
| IndiaAI Mission Empanelment · PIB ↗ | ₹10,372 Cr over 5 years · 10,000 GPU compute units through PPP · Innovation Centre cohort 2 application target: Month 6 · Non-dilutive compute credits + market validation signal |
| GEM Portal · gem.gov.in ↗ | Government e-Marketplace: primary procurement channel for all government AI services. Empanelment target: Month 12. Ministry of Education and MeitY Digital India as primary target ministries. |
| Ministry of Education | 250M students. National Digital Education Architecture (NDEAR). Clark's education expert registry — mapped to curriculum requirements across 22 languages and 36 state boards — is the natural national adaptive learning infrastructure. |
| Ministry of Law & Justice | 30M+ pending cases in Indian courts. Legal research and case analysis support. Clark's Indian law expert registry — constitutional, commercial, criminal, all 25 high court jurisdictions — addresses the core research bottleneck in case preparation. |
| Primary Talent Source · IITM Research Park ↗ | IIT Madras proximity provides PhD and MTech student access for research intern positions. Adjacent Chennai IT cluster of 450,000+ workers provides engineering talent pipeline at competitive 2026 benchmarks. |
| Salary Benchmarks | Research Scientists: ₹2,75,000–3,50,000/month. Lead ML Engineers: ₹2,20,000–2,60,000/month. ML Engineers: ₹1,60,000–2,25,000/month. DevOps: ₹1,30,000–1,80,000/month. All drawn from NASSCOM Chennai cluster data. |
| Attrition Planning | 12% annual attrition industry average for Indian AI companies per LinkedIn Talent Insights. Mitigation: ESOP vesting, mission-aligned work, competitive compensation, IITM Research Park environment, research publication opportunities. |
| 1. AI4Bharat · IIT Madras ↗ | Open-source Indic language models as expert contributions to Clark's registry. Joint research on Indic expert training protocols. IITM proximity makes this a working relationship rather than a formal negotiation. |
| 2. ICAI (Institute of Chartered Accountants) | 400,000+ CAs. Expert model certification with ICAI technical input. Distribution through ICAI continuing education. Clark becomes the standard AI tool for India's chartered accountancy profession. |
| 3. Bar Council of India | Indian law expert registry certified with Bar Council input. Distribution through state bar associations. Potential for Clark to become the standard AI research tool across the Indian legal profession. |
| 4. Sarvam AI · sarvam.ai ↗ | Complementary architectures: Sarvam handles voice/language infrastructure, Clark handles reasoning orchestration. Integration: Sarvam voice input → Clark expert routing → Sarvam voice output. Combined system covers the full human-AI interaction cycle in 22 Indian languages. |
| 5. Zoho / Freshworks | Clark's backbone embedded into Zoho CRM and Freshworks workflows. Distribution to 100M+ users without direct sales effort. |
| 6. Government e-Marketplace · gem.gov.in ↗ | Single empanelment unlocks 700+ government departments as potential customers. Target: Month 12. |
| 7. IIT Network (All 23 IITs) | Faculty research expert model contribution programme. PhD intern pipeline. Academic credibility accelerating enterprise trust-building. |
| 8. Jio Platforms | Bundling Clark's Growth tier with JioFiber premium plans. 200M+ potential users at near-zero CAC. |
| 9. National Medical Commission | Healthcare expert registry certification. India's 1.4M+ licensed physicians as potential expert contributor base. |
| 10. NASSCOM AI Working Group | Industry standards body participation. First-mover in defining the expert registry certification standard that becomes the de facto benchmark for the Indian AI market. |
| Contributor Journey | Developer portal signup → domain expert model upload → automated certification battery → human expert review → staged deployment (alpha → beta → production) → revenue earning begins |
| Revenue Share | 70% to contributor · 30% to Clark · Monthly payouts via UPI/NEFT · Revenue dashboard with real-time activation counts and earnings |
| Marketplace Economics at Target Scale | 10,000 experts × 100 activations/day × ₹2/activation = ₹20L/day gross · Clark's 30%: ₹6L/day · Annual Clark share (360-day basis): ₹21.6 Crore from marketplace alone · Year 3 target |
| Developer Portal Features | Interactive API documentation · Sandbox with full backbone access for testing · SDK in Python, Node.js, Java, Go · Discord community (target: 10,000 developers by Month 12) · Quarterly DevCon at IITM |
| Quality Standard | No pay-to-play listing. Every expert model certified before production registration. Quality is non-negotiable — the marketplace's value depends entirely on the circuit routing to bulbs that actually illuminate correctly. |
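The marketplace arithmetic above works out as follows. The 360-day annualisation is an assumption made to reproduce the ₹21.6 Crore figure; all other inputs are copied from the economics row.

```python
# Marketplace economics at target scale (inputs from the table above).
experts = 10_000
activations_per_day = 100
price_per_activation = 2                 # ₹ per activation

gross_daily = experts * activations_per_day * price_per_activation
clark_daily = int(gross_daily * 0.30)    # Clark's 30% share
clark_annual = clark_daily * 360         # assumed 360-day annualisation

print(gross_daily)    # 2,000,000  → ₹20 lakh/day gross
print(clark_daily)    # 600,000    → ₹6 lakh/day to Clark
print(clark_annual)   # 216,000,000 → ₹21.6 Crore/year
```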
| Languages | Python (ML/research pipeline) · Go (inference serving, performance-critical) · Rust (backbone routing engine, microsecond-level routing decisions) · TypeScript (developer portal frontend) |
| Open-Source Licences | PyTorch (BSD-3) · Hugging Face Transformers (Apache 2.0) · FastAPI (MIT) · LangChain (MIT) · Kubernetes (Apache 2.0) · Qdrant (Apache 2.0). No GPL or AGPL components in any proprietary layer. |
| IP Ownership | All IP developed by founders and employees assigned to Clark AI Private Limited via employment agreements with explicit IP assignment clauses. Provisional patent filed Month 3. |
| Build vs. Buy Decisions | Built: backbone routing engine, expert registry service, verification layer, expert certification pipeline. Bought/open-sourced: foundation model base weights, cloud infrastructure, observability (Datadog), CI/CD tooling. Every build decision required a proprietary advantage open-source could not provide. |
| Known Technical Risks | (1) Routing accuracy below target — mitigated by Month 4 go/no-go benchmark. (2) Expert certification throughput at 10,000-expert scale — mitigated by automated pipeline design. (3) Backbone inference latency at P99 — mitigated by quantisation and caching strategies. |
| Q1: How does the backbone route without knowing expert registry contents in advance? | The backbone is trained on problem-decomposition patterns and domain classification, not on the contents of specific expert models. It generates a semantic routing query; the expert index resolves this to specific model endpoints. Backbone and registry interact through a stable interface contract. |
| Q2: How is expert scope enforced — what stops a legal expert from answering a medical question? | Each expert is certified with an out-of-domain refusal test during the certification battery. Models that respond outside their certified scope fail certification. The backbone's routing additionally constrains queries to certified domain boundaries. |
| Q3: What prevents an expert model from hallucinating within its own domain? | Domain-specific accuracy benchmarking against ground-truth answers from authoritative sources during certification. Confidence calibration trained explicitly. Outputs below a confidence threshold flagged with explicit uncertainty markers rather than delivered as confident assertions. |
| Q4: How does the verification layer work between experts? | Verification layer receives all expert outputs simultaneously. Checks: (a) internal consistency within each output, (b) logical coherence across expert outputs, (c) absence of contradictions on points where domains overlap. Detected contradictions routed back to backbone for re-synthesis. |
| Q5: What is the latency budget for a multi-expert query? | Target: under 3 seconds for a 5-expert parallel activation. Expert activations run in parallel (not sequential), so the expert layer costs the slowest expert's 200–400ms, not the sum. Verification: 100–200ms. Backbone synthesis: 300–500ms. Total: 600–1,100ms. Comfortably within target. |
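Under the stated assumption that expert activations run fully in parallel, the budget can be sanity-checked in a few lines. Component ranges come from the answer above; the specific per-expert millisecond values in the example are illustrative.

```python
# Latency model for a multi-expert query: parallel experts cost max(), not sum().
def query_latency_ms(expert_ms: list[int], verification_ms: int,
                     synthesis_ms: int) -> int:
    """Total = slowest parallel expert + verification + backbone synthesis."""
    return max(expert_ms) + verification_ms + synthesis_ms

# Worst case: slowest of five experts at 400 ms, verification 200 ms, synthesis 500 ms.
worst = query_latency_ms([310, 400, 250, 380, 290], 200, 500)
# Best case: all components at the bottom of their stated ranges.
best = query_latency_ms([200] * 5, 100, 300)

print(best, worst)  # 600 1100 — well inside the 3-second target
```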
Clark does not try to sound like the future. It tries to become indistinguishable from how serious work gets done. The brand is built around four properties: precision (every word used is the correct word, no superlatives), reliability (the brand voice is as consistent as the system's outputs), depth (never shallow, never trend-chasing), and clarity (complex architecture explained in terms a tenth-grader could follow and a domain expert would not find imprecise).
| Name Origin | From the historical figure of the clerk — the person responsible for structuring knowledge and enabling action within institutions. Quiet infrastructure. Functional excellence. The person who made the organisation actually work. |
| Category Name | 'Intelligence Infrastructure Systems' — not AI assistant, not LLM platform. Infrastructure that other systems are built on. The layer beneath, not the interface above. |
| Brand Voice | Precise. Confident. Never breathless. The brand speaks the way the system works: structured, reliable, clear. One rule: if you would be embarrassed saying it to a domain expert in their field, don't say it. |
| Visual Identity | Clean, structured, light. No gradients that evoke vaporware. Typography-driven design that conveys the primacy of language and structure. The interface disappears; the intelligence remains. |
| Trademark Strategy | 'Clark AI', 'Clark Intelligence Infrastructure', and 'Intelligence Infrastructure Systems' trademark filings in India, US, EU, UK. Priority: Month 3 concurrent with patent application. |
| First 4 Hours — Any Incident | Acknowledge (internally within 15 minutes, publicly within 1 hour where legally required) → Contain → Investigate → Communicate (factual, direct, without speculation) |
| Technical Incident Protocol | CEO is technical spokesperson for all AI system failures. No employee social media statements without CEO approval during active incidents. Status page updated within 5 minutes of incident detection. |
| Data Incident Protocol | DPDP Act requirement: notify affected data principals within 72 hours of breach detection. Incident response playbook pre-written, legally reviewed, and stored in data room. |
| Bad News Protocol | Bad news communicated to investors within 24 hours of materialising. Never let investors read it elsewhere first. Format: what happened, what we know, what we are doing, what we need. No spin. |
| Proactive Reputation Management | Regular technical writing by Maurya on backbone architecture, routing quality, and verification methods. Targets: The Ken, Analytics India Magazine, ACM/IEEE conferences, LinkedIn. |
| Time-to-First-Value Target | Under 5 minutes from account creation to first expert-routed query completing with a demonstrably better output than a general model would provide. |
| Activation Flow | 1. Account creation. 2. Domain selection — 'What is your primary work area?' → routes to 5 recommended experts. 3. Pre-populated example query for selected domain. 4. Expert response with attribution visible. 5. Comparison toggle showing same query through general backbone only. 'Wow moment' designed at Step 4. |
| Customer Health Score | Query frequency (30%) · expert diversity accessed (25%) · output acceptance rate (25%) · API integration depth (20%). Score below 40 triggers CSM outreach within 24 hours. |
| Churn Prediction — Five 90-Day Signals | (1) Reduced query frequency → (2) reversion to general backbone → (3) expert scope narrowing → (4) API call decline → (5) support ticket increase. First signal detected → 72-hour CSM contact. |
| NRR Mechanics | NRR = (Starting ARR + Expansion − Contraction − Churn) ÷ Starting ARR. Target: 130%+ in Year 3. Primary driver: expert registry growth means users naturally access more experts over time, increasing API usage organically without upsell effort. |
| Upsell Triggers | Usage approaching tier limit · New expert category added in user's adjacent domain · Enterprise team size growth detected · Custom expert model development inquiry (highest expansion revenue item) |
| Best Expansion Revenue Signal | An enterprise dissatisfied with existing expert quality is the most motivated custom expert development buyer. Dissatisfaction converts into a premium revenue event. |
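The health-score weighting and the NRR formula above can be sketched directly; the signal values in the example are invented, and per-dimension signals are assumed to be normalised to a 0–100 scale.

```python
# Customer health score: weights from the table; signals assumed 0-100.
WEIGHTS = {
    "query_frequency": 0.30,
    "expert_diversity": 0.25,
    "output_acceptance": 0.25,
    "api_depth": 0.20,
}

def health_score(signals: dict[str, float]) -> float:
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

def nrr(start_arr: float, expansion: float, contraction: float,
        churn: float) -> float:
    """Net revenue retention, as defined in the NRR Mechanics row."""
    return (start_arr + expansion - contraction - churn) / start_arr

score = health_score({"query_frequency": 20, "expert_diversity": 40,
                      "output_acceptance": 30, "api_depth": 50})
print(score)              # 33.5 → below 40, triggers CSM outreach
print(nrr(100, 40, 5, 5)) # 1.3  → the 130% Year-3 target
```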
| Training Phase Footprint | 256 × H100 × 700W TDP × 6 months ≈ 130 tonnes CO₂ equivalent. Offset through verified carbon credits (Gold Standard or Verra VCS) purchased concurrent with training initiation. |
| Inference Phase Efficiency | The 72-GPU inference fleet is ~72% smaller than the 256-GPU training fleet. Clark's routing architecture activates only 3–12 experts per query, versus a dense system activating the entire model. Per-query energy use is a fraction of equivalent monolithic inference. |
| Green Compute Target | Year 2: migrate inference workloads to renewable-energy-powered colocation. Tamil Nadu has 13GW+ renewable capacity. Carbon-neutral operations target: Year 3. |
| Hardware Lifecycle | No owned GPU hardware — rented OpEx, zero disposal liability at fleet end. Server hardware at end of life: WEEE-compliant disposal through certified recycler. |
| Student Access Policy | Free tier permanently maintained for students in government schools and undergraduate institutions. Partnership with DIKSHA (Ministry of Education's digital platform) for classroom deployment at zero cost to government schools. |
| Expert Contributor Economic Inclusion | The expert marketplace creates a new income stream for domain experts who previously had no mechanism to monetise specialised knowledge at scale. A retired judge contributing an Indian criminal procedure expert earns passive income from every law student who routes queries to their model. |
| Indic Language Commitment | Certified expert models in all 22 scheduled Indian languages, including lower-resource languages: Santali, Bodo, Dogri, Maithili. Not because the market demands it — because the infrastructure should serve everyone. |
| Red Teaming Programme | Monthly adversarial testing: prompt injection attacks on expert routing, attempts to extract training data from expert models, multi-expert synthesis attacks targeting harmful output generation. Findings update backbone routing guardrails and expert certification requirements. |
| Ethics Review Board | 5 members at Series A: 2 Clark executives + 1 external AI ethics researcher + 1 rotating domain expert + 1 legal/regulatory expert. Authority: can halt deployment of any expert model or backbone update pending ethical review. |
| High-Risk Domain Policy | Medical diagnosis support, legal advice, and financial regulatory compliance expert models must include explicit uncertainty quantification and professional human review recommendations where confidence is below 85%. These are assistants to experts, not replacements for experts. |
| Patent Monitoring | Monthly review of AI patent filings from OpenAI, Google, Anthropic, Meta, Mistral, and Indian AI companies. Specific watch: patent claims touching expert routing, multi-model orchestration, or detached MoE architectures. Freedom-to-operate analysis refreshed quarterly. |
| Talent Signal Monitoring | LinkedIn alerts for key technical hires at competitors signalling strategic pivots. Ten ML-infrastructure hires at Google specialising in multi-model serving would be a competitive signal; a cluster of 'mixture-of-experts routing' specialist hires is a red flag requiring immediate strategic response. |
| Pricing Intelligence | Quarterly review of all competitor pricing pages (all sources verified and linked throughout this document). Price changes at OpenAI or Google trigger immediate value-proposition impact analysis. |
| Expert Marketplace Intelligence | Monitoring for any competitor attempting to build a similar expert registry. Primary signal: job postings for 'expert model curation' or 'domain-specific fine-tuning programme' roles at AI companies. |
| Seed Board Composition | 3 members: CEO (Maurya) · Lead Investor (1 seat) · Independent Director (AI domain expert with deep technical credibility) |
| Reserved Matters — Board Approval Required | New funding rounds · Acquisitions above ₹1 Crore · C-suite hires and terminations · IP licensing agreements · Annual budget approval · Related-party transactions |
| Board Cadence | Monthly calls (60 minutes) · Quarterly full meetings (half-day, IITM Research Park) · Annual strategy session (full day). Board pack sent 5 business days in advance. |
| Committee Structure | Technical Advisory Committee (CTO + Independent Director + 2 external AI researchers): backbone training progress and expert certification standards. Audit & Compensation Committee (CFO + Lead Investor + Independent Director): financial oversight. |
| Investor Update — Monthly (1 page) | Revenue and ARR · Burn and runway · Headcount · Expert registry milestone · Three key wins · Three key risks · One upcoming decision requiring input |
| Bad News Protocol | Bad news communicated to investors within 24 hours. Never let investors read it elsewhere first. Format: what happened, what we know, what we are doing, what we need. |
| Articles of Association | Tailored for deep tech startup. IP protection, expert marketplace governance, international expansion authorisation. |
| Shareholders Agreement (SHA) | Drag-along and tag-along · Anti-dilution (broad-based weighted average) · Information rights for investors holding > 5% · Consent rights for transactions above ₹1 Crore |
| ESOP Plan | SEBI-compliant · 4-year vesting · 1-year cliff · Exercise price at last round valuation · 7.6% post-seed expanding to 12% at Series A |
| IP Assignment Agreements | All founders, employees, and contractors sign IP assignment before first day. No legacy IP owned by individuals. All architecture, models, code, and documentation owned by Clark AI Private Limited. |
| Seed → Series A | Seed investors who maintain thesis alignment are the highest-probability Series A participants. Monthly updates are the primary nurturing mechanism. 'Series A preview' briefing at Month 14 giving seed investors first right of refusal on pro-rata. |
| Series A Readiness Checklist | 70B backbone deployed · ≥18 enterprise customers · ARR ≥ ₹50 Cr · expert registry ≥ 2,000 models · SOC 2 Type II received · Data room current · International expansion plans documented |
| Microsoft / Azure | Intelligence infrastructure acquisition completes Azure's enterprise AI stack. Expert marketplace aligns with Microsoft's enterprise software distribution across Office 365 and Dynamics. Acquisition premium: 15–25× ARR. |
| Google / Alphabet | Clark's expert routing would be complementary to Google Search rather than competitive — search finds the information, Clark's experts reason about it. Resolves the search conflict through ownership rather than competition. |
| Reliance Jio | National AI infrastructure play. India-sovereign intelligence infrastructure aligned with Jio's national scale ambitions. Clark's expert models distributed through JioFiber to 200M+ users. |
| Infosys / TCS | Enterprise AI managed services. Clark's expert marketplace powers the AI consulting layer for India's two largest IT services companies, deployed to their global enterprise client base. |
| Strategic Cultivation Timeline | Begin cultivating relationships with 5+ potential acquirers 3–4 years before any exit event. Ensure Clark appears in all strategic planning conversations for AI infrastructure at each potential acquirer. |
| NSE Main Board · NSE ↗ | Minimum paid-up equity ₹10 Crore · Minimum market cap ₹25 Crore · 3-year operating track record · Positive net worth. Clark targets NSE main board listing at Year 7–8. |
| IPO Trigger Conditions | ARR ≥ ₹500 Crore · Expert marketplace contribution > 40% of revenue · NRR > 130% for 4 consecutive quarters · 3+ years of audited financials |
| IPO Valuation Framework | AI infrastructure at IPO: 10–20× ARR. At ₹3,000 Crore ARR (base case Year 5): implied pre-IPO valuation ₹30,000–60,000 Crore. Seed investors at 24% (pre-dilution): ₹7,200–14,400 Crore. Return on ₹144 Crore seed investment: 50–100× cash-on-cash. |
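The return arithmetic in the framework row above reproduces cleanly (all figures in ₹ Crore, copied from the row):

```python
# IPO valuation framework check, figures in ₹ Crore.
arr = 3_000                                  # base case Year 5 ARR
low_val, high_val = 10 * arr, 20 * arr       # 10-20x ARR multiple

seed_stake = 0.24                            # seed investors, pre-dilution
seed_value = (low_val * seed_stake, high_val * seed_stake)

seed_invested = 144
multiple = (seed_value[0] / seed_invested, seed_value[1] / seed_invested)

print(low_val, high_val)  # 30000 60000  → ₹30,000-60,000 Cr pre-IPO
print(seed_value)         # (7200.0, 14400.0)
print(multiple)           # (50.0, 100.0) → 50-100x cash-on-cash
```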
Clark's ambition is not to be the best AI company. It is to define what the category of Intelligence Infrastructure Systems means — to set the standards by which all future systems are measured, to establish the expert model certification protocols that the entire industry adopts, and to create the open ecosystem through which any qualified entity can contribute expertise to the global intelligence layer.
The expert certification standard that Clark develops internally will, within five years, be proposed as an industry standard through NASSCOM, ISO, and IEEE. Just as TCP/IP is the standard through which all internet traffic flows, Clark's expert model interface specification will be the standard through which all deployed intelligence is routed. This is category ownership at its deepest.
In fifty years, Clark is not a company. It is the infrastructure. The circuit is wired into every institution, every profession, every domain of human knowledge. No one asks which intelligence infrastructure they use — the way no one asks which electrical infrastructure powers their building. Clark is the circuit through which the entire accumulated expertise of human civilisation is made accessible to every person on Earth, in their language, in their context, at the cost of electricity.
| Distributed Ownership Design | Critical technical and operational knowledge is documented, distributed, and independent of any single individual from day one. The circuit must function even if any lightbulb is removed — including the founding team. |
| Governance Continuity | Seed: 3 board members. Series A: 5. Series B+: 7 (add audit chair). IPO: 9 with majority independent. At every stage, governance structure reduces key-person dependency. |
| Mission Institutionalisation | Clark's commitment to Indic language coverage, open expert marketplace, and free access for students is written into the Articles of Association — not dependent on founder presence but embedded in corporate structure. |
| Cat. | Category | Description | 24-Month Total (₹) | % of Seed |
|---|---|---|---|---|
| GPU | GPU Rental — Training Phase | 256 × H100 SXM5 × ₹150/hr × 720hr/mo × 6 months · CoreWeave · backbone training | ₹16,58,88,000 | 11.52% |
| GPU | GPU Rental — Inference Phase | 72 × H100 SXM5 × ₹150/hr × 720hr/mo × 18 months · expert serving + inference | ₹13,99,68,000 | 9.72% |
| HR | Employee Salaries | 3 founders + 97 employees · AI Research = 30.3% of payroll · Chennai 2026 benchmarks | ₹26,63,70,000 | 18.50% |
| HR | EPF + Gratuity (Statutory) | Employer PF 12% + gratuity · mandatory EPFO compliance | ₹2,23,85,311 | 1.55% |
| HW | MacBook Laptops (M5 Air PRO/STD) | PRO ₹1,49,900 · STD ₹1,19,900 · purchased on hire date · 100 units over 24 months | ₹1,36,39,900 | 0.95% |
| HW | Servers & Storage Hardware | API servers + storage nodes · one-time CapEx · fully depreciated Year 1 | ₹79,15,000 | 0.55% |
| OPS | Incubation / Office / Utilities | IITM Research Park + internet + electricity + facilities | ₹2,35,85,000 | 1.64% |
| OPS | Security / DevOps / SOC 2 | Datadog · SOC2 Type I/II · CI/CD · GitHub Actions · security tooling | ₹2,57,44,962 | 1.79% |
| OPS | Software Licenses | GitHub · Jira · Slack · W&B · Google Workspace · Notion · monitoring | ₹1,21,60,943 | 0.84% |
| DATA | Dataset Licensing | Training datasets · HuggingFace + proprietary · upfront + periodic renewal | ₹1,00,00,000 | 0.69% |
| MKT | Go-To-Market (Sales + Marketing) | Performance marketing · content · sales team · CRM · events · PR · expert contributor outreach | ₹3,32,00,000 | 2.31% |
| LEG | Legal, IP & Risk | Patent portfolio (15 filings) · legal retainer · insurance · regulatory advisory | ₹99,00,000 | 0.69% |
| RES | Market Research & Intelligence | Customer discovery · analyst reports · competitive intelligence · pricing research | ₹1,24,00,000 | 0.86% |
| CS | Customer Success | 6 CSMs + Manager · success platform · training materials · NPS tooling | ₹1,52,50,000 | 1.06% |
| FIN | Finance Operations | Statutory audit · tax advisory · FP&A tooling · data room setup | ₹64,00,000 | 0.44% |
| GOV | Governance & Board | Board operations · legal docs · ESOP administration · investor relations | ₹44,00,000 | 0.31% |
| PART | Partnership & Ecosystem | Developer portal · DevCon events · expert contributor outreach · strategic partnerships | ₹44,00,000 | 0.31% |
| | TOTAL PLANNED OPERATIONAL SPEND | | ₹68,76,57,116 | 47.75% |
| ★ | STRATEGIC BUFFER RESERVE — Emergency · Scale-up · Unforeseen opportunities | Unallocated · structural protection against GPU price increases, timeline slippage, or strategic opportunities | ₹75,23,42,884 | 52.25% |
| | TOTAL SEED CAPITAL ACCOUNTED | | ₹1,44,00,00,000 | 100.0% |
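The GPU line items and the overall total reconcile exactly. A quick check, with the table's figures rendered as plain integers (Indian digit grouping dropped):

```python
# Reconciling the budget table's GPU line items and grand total.
training = 256 * 150 * 720 * 6    # H100s x ₹/hr x hr/month x months
inference = 72 * 150 * 720 * 18

print(training)    # 165,888,000 → ₹16,58,88,000 (training line item)
print(inference)   # 139,968,000 → ₹13,99,68,000 (inference line item)

operational = 687_657_116         # ₹68,76,57,116 planned operational spend
buffer = 752_342_884              # ₹75,23,42,884 strategic buffer reserve
print(operational + buffer)       # 1,440,000,000 → ₹144 Crore exactly
```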
Clark AI April 2026 · Seed Stage · ₹144 Crore · Hierarchically Detached Federated Mixture-of-Experts