You have momentum. You have partners lined up, liquidity commitments in place, and a community in Discord asking "wen mainnet."
Someone on the team says: "We need an audit right now." Someone else says: "We'll do the audit after launch."
Both of them are wrong.
If you audit too early, you are paying for feedback on code that might change next week; the report is not a lifetime guarantee. If you launch without one, the market will audit you instead, and the fee is your TVL.
This binary view of security, where you are either "audited and safe" or "unaudited and reckless," is the biggest driver of preventable hacks. Security is not something you only care about right before launch. It is a maturity curve that has to grow alongside your product, from ideation to a fully grown protocol.
This article is the roadmap for the security evolution that needs to take place in every Web3 project.
The Maturity Levels
Level 0: The Prototype
You're early and still working on the overall architecture of your project and how you want it to look. The product surface is still being explored. Your team is learning what should be on chain vs off chain, what should be configurable, what your immediate priority is, and what is still experimental. Clarity is what matters most at this stage.
Your job here is to get a clear understanding of your system on:
- The core user flows
- How funds and critical state change
- What assumptions you are making
Write these down as a living doc the team updates as the project evolves. This is also where you list likely "future upgrades" so you can prepare and design for them.
This is also the stage where many teams neglect basic key management, because security is not yet on their radar. Some devs commit secret API keys or private wallet keys to GitHub; others deploy their app from their everyday degen wallet. That may be fine for a closed source beta, but many forget and run with it until someone catches it.
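A minimal sketch of the key-management habit worth building even at the prototype stage: read deploy keys from the environment rather than source code. The variable name `DEPLOYER_PRIVATE_KEY` is illustrative, not a standard.

```python
import os

def load_deployer_key() -> str:
    """Read the deploy key from the environment instead of hardcoding it.

    DEPLOYER_PRIVATE_KEY is an illustrative name; keep the real value in a
    local .env file that is listed in .gitignore, never committed to git.
    """
    key = os.environ.get("DEPLOYER_PRIVATE_KEY")
    if key is None:
        # Fail loudly rather than silently deploying with a wrong or missing key
        raise RuntimeError("DEPLOYER_PRIVATE_KEY is not set; refusing to deploy")
    return key
```

The point is the failure mode: a missing key aborts the run instead of falling back to anything committed to the repo.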
When to level up: When the core architecture stops changing every week and you are building something you can stabilize, test, and review seriously.
Level 1: Launch Hygiene
As you grow from a basic MVP and keep adding features, you enter Level 1. This is the phase of continuous development on your smart contracts: the codebase is getting bigger, and so is your attack surface.
It is also the phase where you need to put extra work into making the code clean, stable, and understandable. Most early hacks are not sophisticated; they come from minor mistakes overlooked in messy code where basic behavior is unclear, edge cases are not considered, and intent is not documented for each new feature.
- Complexity hides bugs — Develop the habit of refactoring, with clean naming and proper structure. Simplicity is crucial in any language, and doubly so when deploying on the EVM, where exploits are everywhere.
- Pick proven patterns — and well used libraries, especially for sensitive pieces like auth, roles, and upgrades. OpenZeppelin and Solady exist for a reason.
- Write a small set of tests — that prove the core flows work as intended and that the obvious fail cases revert: simple function tests (deposit, claim, withdraw), simple user paths (deposit => withdraw, stake => claim => withdraw), and basic access control checks.
- Treat deploy and admin keys as high risk — You don't need a multisig yet, but keep them separate from everyday wallets. If the key you deploy with is the same one you're aping memecoins with, you have a problem.
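The "small set of tests" bullet can be sketched in miniature. This is a toy in-memory stand-in for a vault contract, not your real framework; the names (`ToyVault`, `deposit`, `withdraw`) are illustrative, and in practice you would write the equivalent in Foundry or your test harness of choice.

```python
class ToyVault:
    """Minimal in-memory stand-in for a vault contract (illustrative only)."""
    def __init__(self):
        self.balances = {}

    def deposit(self, user: str, amount: int) -> None:
        if amount <= 0:
            raise ValueError("zero deposit")
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user: str, amount: int) -> None:
        # The obvious fail case must revert, and a test must prove it does
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[user] -= amount

def test_core_flows():
    v = ToyVault()
    v.deposit("alice", 100)
    v.withdraw("alice", 60)
    assert v.balances["alice"] == 40        # happy path: deposit => withdraw
    try:
        v.withdraw("alice", 1_000)          # over-withdraw must fail
        assert False, "should have reverted"
    except ValueError:
        pass
```

Note that half the test is the failure case; at this level, proving that reverts happen is as important as proving that happy paths succeed.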
The goal here is not to be sure there are no vulnerabilities in your codebase. It is to make sure your codebase and your approach are solid, so when the time for deeper testing comes, the team doesn't waste time on basic stuff.
When to level up: When the code is stable enough that you can freeze scope for a bit and start trying to break it on purpose.
Level 2: Audit Ready
Now that the scope is stable, the team can run thorough in-house tests and bring in an external auditor to review the implementation. Security at this level should be approached from the perspective of an attacker trying to make your code do unintended things.
- Document the attack surface — Know which parts of your product are most exposed to security threats. Document the flows, assumptions, edge cases, and failure modes. If you are doing integrations, follow the partner's best practices and read their docs thoroughly.
- Add fuzzing and invariant tests — on the modules that matter most. Coverage is a misleading metric: it only tells you a path was executed, not that it was tested under adversarial conditions. 100% coverage of happy-path tests is worse than 50% coverage that actually tries to break your protocol.
- Implement mutation testing — where it fits, so you can prove your tests actually catch broken logic. Mock tests give you mock results; stick to the real implementations.
- Work closely with integration partners — to validate your assumptions: read their docs and, if possible, get their team to sanity check your integration. If you are doing heavy integrations, reach out; they know their code best and have seen many projects integrate before.
- Run an internal review — where devs actively try to break the product they build; they are the ones who know the code best. Set a time window that forces roughly 300-400 nSLOC per day. During that window, the only goal is to break code. Don't fix bugs the moment you find them; file GitHub issues and keep breaking.
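The fuzzing-and-invariants bullet, sketched as a hand-rolled stateful fuzz loop. Real setups would use your framework's invariant testing (e.g. Foundry's) against the actual contracts; this Python model only illustrates the shape: random operation sequences, with a solvency invariant checked after every call.

```python
import random

class Vault:
    """Toy accounting model: 'total' should always back the sum of balances."""
    def __init__(self):
        self.total = 0
        self.balances = {}

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user: str, amount: int) -> None:
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient")
        self.balances[user] -= amount
        self.total -= amount

def fuzz_invariant(steps: int = 1000, seed: int = 0) -> int:
    """Drive random deposit/withdraw sequences; assert the solvency
    invariant (total == sum of balances) after every single call."""
    rng = random.Random(seed)
    v = Vault()
    users = ["a", "b", "c"]
    for _ in range(steps):
        user = rng.choice(users)
        amount = rng.randint(1, 100)
        try:
            if rng.random() < 0.5:
                v.deposit(user, amount)
            else:
                v.withdraw(user, amount)
        except ValueError:
            pass  # expected reverts are fine; corrupted state is not
        assert v.total == sum(v.balances.values()), "solvency invariant violated"
    return v.total
```

This is the difference from happy-path coverage: the sequence of operations is adversarially random, and the property is checked continuously, not just at the end.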
Only after you have done your tests and reviews do you pay for an external audit.
With your internal testing done, you should now have a properly defined scope for the external audit. Put only logic contracts in scope; interfaces can be dropped to reduce the nSLOC and the price. An external audit works best when you have removed most of the noise, letting experts focus on breaking your assumptions and finding the edge cases you somehow missed.
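Since scope and price are quoted in nSLOC, it helps to ballpark the number yourself. A rough counter, assuming the usual convention of skipping blanks, `//` lines, and `/* ... */` blocks; real tools like cloc are more precise.

```python
def nsloc(source: str) -> int:
    """Rough nSLOC for a Solidity file: skip blank lines, // comments,
    and /* ... */ block comments. Illustrative only; use a real counter
    (e.g. cloc) when pricing an actual audit."""
    count = 0
    in_block = False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block:
            if "*/" in stripped:
                in_block = False
            continue
        if not stripped or stripped.startswith("//"):
            continue
        if stripped.startswith("/*"):
            if "*/" not in stripped:
                in_block = True
            continue
        count += 1
    return count
```

Run it over the logic contracts only, with interfaces excluded, to see what the audit quote is actually based on.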
One more thing: the firm doesn't do the audit, the auditors do. Brands matter in cybersecurity, but a brand is 80% client communication and 20% delivered results. Most firms work with contractors, so the same team of three can cost one price at firm X and twice as much at firm Y, for the exact same people. Look at the individual auditors: their contest wins, their past experience, and ask for people familiar with the type of project you are building.
When to level up: You have a final report. You ship the reviewed version, resolve meaningful findings properly, and the product version that is live is the one you audited.
Level 3: Production Hardened
Level 3 is the shift from "securing smart contracts" to "securing the protocol" as a whole. You are live, the product is being used and functioning as intended, and nothing is breaking. Your code is solid, your team is growing, and the attack surface is no longer just the code but the entire system.
This is where the biggest failures stop being purely contract mistakes and start being control failures. It's about securing the humans, the machines, and the procedures.
- Introduce multisigs — and move ownership and high stakes operational powers behind them. High impact changes like upgrades, treasury moves, and role changes should require a higher threshold. Require hardware wallets for all signers.
- Allow a single role to pause — Pausing quickly creates friction for attackers and buys you time. It is easier to explain downtime to users than to explain why their funds are gone.
- Simulate before signing — Make transaction simulation a habit and avoid blind signing for high value actions. Signature spoofing happens more often than you'd think.
- Treat your frontend, DNS, and dependency chain as part of the protocol — One hijacked domain or poisoned dependency update can drain users even when the contracts are fine.
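The multisig bullet boils down to a threshold rule that the on-chain contract enforces. A toy m-of-n check, with illustrative names, showing why only approvals from recognized signers count:

```python
def can_execute(approvals: set, signers: set, threshold: int) -> bool:
    """Toy m-of-n check mirroring what a multisig enforces on-chain:
    only approvals from recognized signer keys count toward the threshold."""
    valid = approvals & signers  # discard approvals from unknown keys
    return len(valid) >= threshold

signers = {"alice", "bob", "carol", "dave"}
assert can_execute({"alice", "bob", "carol"}, signers, 3)        # 3-of-4 passes
assert not can_execute({"alice", "mallory", "eve"}, signers, 3)  # unknown keys don't count
```

The second assertion is the point of the design: compromising one signer, or flooding with attacker-controlled keys, is not enough to cross the threshold.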
This is also the phase where you build a real social media and community presence. As the team grows, so does the opening for social engineering attacks. Make sure the entire team is well informed about them, because the trust in your protocol rests on its people too. It is very common for attacks to originate from a compromised team member. Assume the worst and have processes for it.
When to level up: When good security policies are in place and weird behavior triggers a calm process, not chaos: you can coordinate quickly, execute safely, and ship fixes without making the problem worse.
Level 4: Continuous Security
Now security is part of the process, not a milestone you are trying to hit. Every new change and every new update goes through the same security procedures to keep the whole system free from threats.
However, as you ship changes regularly and the integration surface keeps expanding, drift becomes the main enemy. Small changes over time can create blind spots, and eventually something might slip through. Level 4 is all about not letting that happen, or catching it before it causes any big damage.
- Continual internal reviews and a retainer with an external auditor — so every new update or feature ships reviewed. Move from one-off audits to a continuous relationship; a retainer is cheaper than scrambling for availability before every release.
- Active automated monitoring — of on chain and off chain changes, with alerts split by team and authority level. Alert on invariant violations (e.g., a failing solvency check) rather than just "large transactions." Consider a bot that can automatically pause the contract when it detects a critical invariant failure.
- Introduce a bug bounty program — so any security researcher can have a crack at your system and report flaws before threat actors find them. The more eyes, the better; a $50k bounty is cheaper than a $10M hack.
- Keep a lightweight integration drift check — so when oracles, bridges, routers, or other key partners change behavior, you re-validate your assumptions.
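The monitoring bullet above can be sketched as a single monitoring tick. Everything here is a stand-in: `read_total` and `read_backed` represent RPC reads of on-chain accounting, and `pause` and `alert` represent your circuit breaker and paging hooks; none of these names come from any real library.

```python
def check_and_maybe_pause(read_total, read_backed, pause, alert) -> bool:
    """One monitoring tick: compare two on-chain accounting views and
    escalate on mismatch. All four callables are illustrative hooks:
    reads stand in for RPC calls, pause/alert for your incident tooling."""
    total = read_total()
    backed = read_backed()
    if total != backed:
        # Invariant violation: page the right team AND trip the breaker
        alert(f"solvency invariant broken: total={total} backed={backed}")
        pause()
        return False
    return True
```

The design choice worth copying is that the alert fires on a broken invariant, not on transaction size, so the signal that reaches the on-call team is already actionable.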
Monitoring here doesn't mean alerting on everything. It means sending the right info to the right person or team so they can react in time instead of drowning in noise. This is where good teams start catching problems in private, through follow-up reviews, bounties, and monitoring, instead of learning the hard way in public.
