The Web3 grant ecosystem has changed faster than most builders realize. What was an informal system of bounties and ecosystem funds three years ago has matured into a structured, competitive funding environment with defined criteria, milestone-based disbursement, and acceptance rates that rival early-stage venture.
The capital is real and growing. The gap is on the applicant side — most teams applying for grants are builders who have never written a grant application, don't understand what reviewers are evaluating, and underestimate how much the landscape has professionalized since they last looked at it.
This guide maps the major programs, what distinguishes them, and the factors that consistently differentiate successful applications from unsuccessful ones. It is a starting framework, not an exhaustive directory — the landscape shifts quarterly, and how any given application should be positioned depends on the particulars of your team and project.
How to Read the Current Landscape
Before examining specific programs, it helps to understand the structural shift that happened between 2022 and 2025. Early grant programs were permissive and broad — they funded almost anything technically coherent and vaguely ecosystem-aligned. That era is over. The major programs have converged on a shared set of expectations: defined evaluation criteria, milestone-gated disbursement, and published priorities that applications are measured against.
The Major Programs: A Working Map
Ethereum Foundation — Ecosystem Support Program
The EF shifted from open intake to curated Wishlists and Requests for Proposals in September 2025, pausing open submissions due to application volume. In practice, you now need to respond to an active RFP or build a relationship with the program before submitting cold. Individual grants range from small academic stipends to multi-year research funding — standard project grants typically fall between $50K and $250K.
What works: rigorous technical depth, connection to publicly stated priorities, teams with prior open-source contributions to the Ethereum ecosystem. What doesn't: vague ecosystem alignment, applications that could have been written for any chain, business-plan-style proposals without clear public goods framing.
Optimism — RetroPGF and Grants Council
Individual RetroPGF allocations have ranged from a few thousand OP to millions. Grants Council awards are typically smaller and milestone-gated. Success requires documented, measurable prior impact and active governance participation — not promises.
Arbitrum DAO — Various Programs
Uniswap Foundation
Polygon, Solana Foundation, and Emerging Programs
Polygon's Season 2 allocated 35M POL with external Grant Allocators including Eliza Labs and Gitcoin — a model where third-party experts evaluate applications in specific domains (AI, DePIN, gaming). Understanding which allocator is responsible for your category matters as much as understanding Polygon's overall priorities.
Solana pioneered the grant-to-investment hybrid model, where convertible grants become equity-equivalent investments upon milestone completion. This changes the incentive structure significantly — it is closer to early-stage venture than traditional grant funding.
The Factors That Consistently Differentiate Applications
Across all major programs, the applications that succeed share several characteristics that have nothing to do with technical quality.
- Specificity of impact framing. Reviewers evaluate hundreds of applications. The ones that clearly and quickly answer "what changes in the ecosystem if this project succeeds, and how would we measure it" consistently outperform technically superior applications that bury the answer.
- Milestone credibility. Your milestone structure signals whether you understand how to execute. Milestones that are too large or too vague indicate inexperience. Milestones that are too granular indicate over-engineering. The right structure demonstrates both ambition and operational realism.
- Prior ecosystem signal. Open-source contributions, governance participation, community engagement, prior deployments — these function as credibility proxies, especially at programs where reviewers have limited time to evaluate each team independently.
- Positioning relative to stated priorities. Programs publish what they want. The best applications read those priorities carefully and frame the work in terms the program has already chosen to use. This is not manipulation — it is communication.
- Budget defensibility. Grant budgets that map clearly to specific deliverables are treated differently from budget requests that appear to cover general team operating costs. Every line should be justifiable in terms of what it produces.
What This Framework Doesn't Tell You
This is a map of the landscape, not a positioning strategy for your specific project. The gap between understanding these programs in general and successfully navigating one of them — with your team's specific background, your project's current maturity, and the ecosystem relationships you do or don't have — is where the real advisory work happens.
The programs change quarterly. Priorities shift. New programs launch. Acceptance rates fluctuate. Working with an advisor who tracks these shifts in real time, rather than consulting a static guide, is the difference between an application that lands in the right window and one that misses it.
The Arch Consulting has advised Web3 protocols and infrastructure teams on grant strategy across major ecosystem programs since 2020. This document is updated periodically and reflects program structures as of Q2 2026.
The gap between frameworks and execution is where advisory work happens. If this raised questions specific to your project, that is what the diagnostic conversation is for.