
Using Static Analysis Tools for Smart Contract Security


Core Concepts of Static Analysis

Foundational principles and techniques for examining smart contract code without executing it to identify security vulnerabilities and code quality issues.

01

Abstract Syntax Tree (AST)

An Abstract Syntax Tree is a hierarchical tree representation of a program's syntactic structure. It breaks code down into constructs such as functions, variables, and control-flow statements.

  • Enables precise pattern matching for vulnerability detection.
  • Tools like Slither parse Solidity into an AST for analysis.
  • Allows analysis of code structure independent of formatting or comments.
  • Why this matters: It is the fundamental data structure that enables automated, deep semantic analysis of contract logic.
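
As a rough sketch (the node names follow the Solidity compiler's AST conventions but are illustrative here), the fragment below annotates a trivial contract with the kinds of nodes an analyzer works with:

solidity
// Illustrative mapping from source constructs to AST nodes.
contract Counter {                                // ContractDefinition
    uint256 public count;                         // VariableDeclaration (state variable)

    function increment(uint256 by) external {     // FunctionDefinition + ParameterList
        if (by > 0) {                             // IfStatement containing a BinaryOperation
            count += by;                          // ExpressionStatement wrapping an Assignment
        }
    }
}
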
02

Control Flow Graph (CFG)

A Control Flow Graph models all possible paths execution can take through a program. Nodes represent basic blocks of code, and edges represent jumps or branches between them.

  • Critical for detecting reentrancy by analyzing call sequences.
  • Identifies unreachable code and complex logical conditions.
  • Visualizes how functions interact and state changes propagate.
  • Why this matters: It reveals the order of operations, which is essential for finding state manipulation and business logic flaws.
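
For intuition, a minimal sketch (the block labels are invented for illustration) of how one function's branches become basic blocks and edges in a CFG:

solidity
// Comments mark the basic blocks a CFG would contain for settle().
contract Settlement {
    mapping(address => uint256) public balances;
    event SettlementFailed(address indexed user, uint256 amount);

    function settle(uint256 amount) external {
        // Block A (entry): input checks
        require(amount > 0, "zero amount");
        if (balances[msg.sender] >= amount) {
            // Block B (true branch): state update, then external interaction
            balances[msg.sender] -= amount;
            payable(msg.sender).transfer(amount);
        } else {
            // Block C (false branch): no state change
            emit SettlementFailed(msg.sender, amount);
        }
        // Block D: join point with incoming edges from B and C
    }
}
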
03

Data Flow Analysis

Data Flow Analysis tracks how values (especially tainted or user-controlled data) propagate through variables and function calls.

  • Pinpoints where unchecked user input reaches critical operations.
  • Used to detect integer overflows and authorization bypasses.
  • Follows the path of msg.value or msg.sender to ensure proper checks.
  • Why this matters: It directly uncovers input validation flaws and trust boundary violations that lead to exploits.
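
A hypothetical fragment showing the kind of flow such an analysis traces, from user-controlled sources (msg.value, msg.sender) to the operation that ultimately depends on them:

solidity
// Data flow analysis follows user-controlled inputs into critical operations.
contract Crowdsale {
    mapping(address => uint256) public contributions;
    address public beneficiary = msg.sender;

    function contribute() external payable {
        uint256 amount = msg.value;              // source: user-controlled value
        contributions[msg.sender] += amount;     // value propagates into persistent state
    }

    function sweep() external {
        // msg.sender flows into the authorization check; the analysis confirms
        // every path to the transfer below passes through this guard.
        require(msg.sender == beneficiary, "not beneficiary");
        payable(beneficiary).transfer(address(this).balance); // critical sink
    }
}
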
04

Symbolic Execution

Symbolic Execution analyzes programs using symbolic values instead of concrete inputs to explore many execution paths simultaneously.

  • Tools like Manticore use it to generate test cases for edge conditions.
  • Can prove the absence of certain overflow conditions under all inputs.
  • Models complex constraints and path conditions mathematically.
  • Why this matters: It provides high-confidence verification of invariant properties and finds deep, path-dependent bugs.
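
As a small assumed example, the deep branch below is reachable for exactly one input; a symbolic executor treats code as a symbol, records the path condition code == 0xC0FFEE, and asks a solver for a concrete value that reaches it:

solidity
// Symbolic execution derives the single input that unlocks the contract.
contract Lockbox {
    bool public unlocked;

    function unlock(uint256 code) external {
        if (code == 0xC0FFEE) {
            unlocked = true; // path-dependent state change, reached for one input only
        }
    }
}
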
05

Taint Analysis

Taint Analysis is a specific data flow technique that marks untrusted (tainted) data sources and tracks if they influence security-critical sinks without proper sanitization.

  • Flags user-controlled parameters reaching call() or delegatecall().
  • Identifies where tx.origin is used for authorization.
  • Helps find cross-contract pollution and message call vulnerabilities.
  • Why this matters: It automates the search for the root cause of most external exploit vectors in DeFi protocols.
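
A hedged illustration of tainted sources reaching sensitive sinks without sanitization, which is the shape of finding this technique automates:

solidity
// Taint analysis: attacker-controlled inputs flow unchecked into dangerous sinks.
contract Relay {
    address public owner = msg.sender;

    function execute(address target, bytes calldata data) external {
        // `target` and `data` are tainted (fully caller-controlled) and flow
        // directly into delegatecall -- a classic source-to-sink finding.
        (bool ok, ) = target.delegatecall(data);
        require(ok, "call failed");
    }

    function setOwner(address newOwner) external {
        // tx.origin used for authorization: a malicious contract can relay a
        // call that originates from the real owner's transaction.
        require(tx.origin == owner, "not owner");
        owner = newOwner;
    }
}
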
06

Pattern Matching & Rule-Based Detection

Pattern Matching involves searching code for known vulnerable patterns or deviations from security best practices using predefined rules or heuristics.

  • Detects common vulnerabilities like unsafe ERC20 approvals.
  • Identifies deprecated Solidity constructs or compiler warnings.
  • Rules can be custom-built for specific protocol standards.
  • Why this matters: It provides fast, scalable first-pass analysis to catch well-known issues before deeper, more expensive techniques are applied.
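
As a sketch, the fragment below contains two patterns that rule-based detectors typically match on syntax alone, without deeper path reasoning:

solidity
// Syntactic patterns commonly flagged by rule-based detectors.
interface IERC20 {
    function approve(address spender, uint256 amount) external returns (bool);
    function transfer(address to, uint256 amount) external returns (bool);
}

contract Treasury {
    function pay(IERC20 token, address to, uint256 amount) external {
        // Pattern: ERC-20 return value ignored (unchecked-transfer style rule).
        token.transfer(to, amount);
    }

    function allowSpender(IERC20 token, address spender) external {
        // Pattern: unbounded approval granted in a single step (unsafe-approval rule).
        token.approve(spender, type(uint256).max);
    }
}
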

Choosing and Setting Up Analysis Tools

Process overview

1

Define Your Analysis Scope and Requirements

Identify the specific vulnerabilities and contract characteristics you need to audit.

Detailed Instructions

Begin by defining the analysis scope. Determine if you are auditing a single contract, a complex DeFi protocol with multiple interacting contracts, or a specific vulnerability class like reentrancy or integer overflows. For a lending protocol, your scope must include the core lending logic, oracle integrations, and governance mechanisms. Establish your requirements: Do you need to integrate the tool into a CI/CD pipeline, or is this a one-time manual audit? Must it support specific compiler versions (e.g., Solidity 0.8.x) or custom EVM opcodes? This initial scoping prevents tool mismatch and ensures the selected analyzer can handle the contract's complexity and your operational needs.

  • Sub-step 1: List all smart contract files and their interdependencies.
  • Sub-step 2: Document the target EVM chain (e.g., Mainnet, Arbitrum) and any chain-specific considerations.
  • Sub-step 3: Prioritize vulnerability classes based on the contract's function (e.g., access control for admin functions).
solidity
// Scope example: A simple ERC-20 with minting and pausing.
contract MyToken is ERC20, Ownable {
    bool public paused;

    function mint(address to, uint256 amount) external onlyOwner whenNotPaused {
        ...
    }
}
// Key requirements: Check for owner privileges, pausing logic, and standard ERC-20 compliance.

Tip: For large codebases, create a dependency graph to visualize contract interactions and identify high-risk entry points for deeper analysis.

2

Evaluate and Select Primary Static Analysis Tools

Compare leading tools based on detection capabilities, integration, and reporting.

Detailed Instructions

Research and compare established static analysis tools. Slither is a powerful open-source framework for Solidity that performs data flow analysis and ships with a large set of built-in detectors. Mythril uses symbolic execution and taint analysis to find security issues. Security-focused linters like Solhint can enforce code standards. Evaluate each tool against your requirements. Check whether they detect the vulnerability types in your scope (e.g., Slither's reentrancy-eth and unchecked-transfer detectors). Assess the false positive rate by testing on a known code sample. Consider the output format; Slither provides JSON for CI integration, while Mythril offers more detailed execution traces. For a comprehensive audit, plan to use a primary tool (like Slither) supplemented by a linter (Solhint) for style and basic issues.

  • Sub-step 1: Install candidate tools (e.g., pip install slither-analyzer).
  • Sub-step 2: Run each on a test contract and compare the issues flagged.
  • Sub-step 3: Review the tools' documentation for custom detector support if needed.
bash
# Example evaluation command for Slither
slither . --exclude-informational --filter-paths node_modules --json slither-report.json

Tip: Published tool surveys and audit reports from reputable firms often provide benchmarks on tool effectiveness for different vulnerability classes.

3

Configure the Tool Environment and Dependencies

Set up the correct compiler version, resolve imports, and configure analysis parameters.

Detailed Instructions

Proper configuration is critical for accurate analysis. First, ensure you have the correct Solidity compiler version. Most tools rely on the solc binary. Use solc-select (or your framework's built-in compiler management) to match the version specified in the contract's pragma (e.g., pragma solidity ^0.8.19;). Next, resolve all dependencies. If the project uses OpenZeppelin or other libraries, install them via npm (npm install @openzeppelin/contracts) or specify the correct remappings for the analyzer. For Slither, you may need a slither.config.json file to set remappings and exclude directories. Finally, configure the tool's analysis depth and timeout settings; for complex contracts, increase Mythril's --max-depth above its default so longer execution paths are explored.

  • Sub-step 1: Check pragma solidity statements in all contracts.
  • Sub-step 2: Install dependencies listed in package.json or hardhat.config.js.
  • Sub-step 3: Create a configuration file for the primary tool with custom parameters.
json
// Example slither.config.json
{
  "filter_paths": ["node_modules", "test"],
  "solc_remaps": [
    "@openzeppelin/=node_modules/@openzeppelin/",
    "@chainlink/=node_modules/@chainlink/"
  ]
}

Tip: Run slither-check-erc for contracts claiming ERC compliance to verify they match the standard's specification.

4

Execute Initial Scan and Triage Results

Run the analysis, categorize findings by severity, and filter out false positives.

Detailed Instructions

Execute the configured tool on your target codebase. For Slither, run slither . --exclude-informational. For Mythril, use myth analyze <contract_file.sol> --solc-json remappings.json. The initial output will contain findings of varying severity. Begin triage by categorizing each finding: High (e.g., reentrancy, integer overflow), Medium (e.g., gas inefficiencies, weak PRNG), Low/Informational (e.g., coding style). Manually review each finding in the context of the code. A reported unchecked-transfer may be a false positive if the token is known to revert on failure (like USDC). Use the tool's --exclude-dependencies flag to focus on your code. Document confirmed issues and the rationale for dismissing false positives to maintain an audit trail.

  • Sub-step 1: Run the primary analysis command and save output to a file.
  • Sub-step 2: Map each finding to a line number in the source code for review.
  • Sub-step 3: Create a spreadsheet or use a platform like DefectDojo to track issue status.
bash
# Example Mythril command with increased depth and timeout
myth analyze src/Vault.sol --solc-json config.json --max-depth 20 --execution-timeout 60

Tip: For complex findings, use Slither's printers (e.g., --print cfg or --print data-dependency) to visualize the control flow or data dependencies leading to the vulnerability.

5

Integrate Tools into Development Workflow

Automate security checks using CI/CD pipelines and pre-commit hooks.

Detailed Instructions

To prevent regressions, integrate static analysis into the development workflow. Implement a pre-commit hook using Husky or pre-commit to run a linter like Solhint on staged Solidity files. For continuous integration, add a job to your GitHub Actions or GitLab CI pipeline that executes the primary analysis tool on every pull request. Configure the CI job to fail if new high-severity issues are introduced. Use the tool's JSON output format to generate reports. You can also set up differential analysis that compares the findings on two revisions of a contract and reports only newly introduced issues. This ensures security analysis is a consistent gatekeeper, not just a final audit step, embedding security into the Software Development Lifecycle (SDLC).

  • Sub-step 1: Add a solhint check to package.json scripts and configure .solhint.json.
  • Sub-step 2: Create a .github/workflows/security-analysis.yml file for GitHub Actions.
  • Sub-step 3: Configure the CI step to post findings as a comment or fail the build based on severity thresholds.
yaml
# Example GitHub Actions step for Slither
- name: Run Slither Security Analysis
  run: |
    pip install slither-analyzer
    slither . --exclude-informational --exclude-dependencies --fail-high

Tip: Run multiple analyzers in parallel via a custom script or aggregation framework and deduplicate their findings for broader coverage.

Tool-Specific Analysis Workflows

Starting with Linters and Basic Scanners

Static analysis begins with foundational tools that enforce code style and detect common vulnerabilities without deep execution context. These are essential for establishing a secure baseline.

Key Tools and Their Role

  • Slither: A Solidity static analysis framework. It provides a suite of built-in detectors for issues like reentrancy, uninitialized storage pointers, and incorrect ERC20 interfaces. It is the first tool to run for a quick vulnerability overview.
  • Solhint: A linter for Solidity code. It enforces style guides and best practice rules, such as function ordering and naming conventions, which improves code readability and reduces subtle bugs.
  • MythX: While offering deeper analysis, its quick scan mode provides an accessible entry point for detecting high-severity issues in contracts deployed on networks like Ethereum and Polygon.

Example Workflow

When analyzing a basic ERC20 token contract, start by running solhint to ensure formatting and common pitfalls are addressed. Then, execute slither . --detect reentrancy-eth to check for critical vulnerabilities. This two-step process catches low-hanging fruit before proceeding to more complex tools.

Common Vulnerabilities Detected by Static Analysis

Overview of prevalent smart contract vulnerabilities identified by static analysis tools.

Vulnerability | Detection Rate | Severity | Common Tools
Reentrancy | ~95% | Critical | Slither, Mythril, Securify
Integer Overflow/Underflow | ~90% | High | Slither, Oyente, SmartCheck
Unchecked Call Return Values | ~85% | Medium | Mythril, Slither, Solhint
Access Control Issues | ~80% | Critical/High | Slither, MythX, ConsenSys Diligence
Uninitialized Storage Pointers | ~75% | Medium | Slither, Remix Analyzer
Timestamp Dependence | ~70% | Low/Medium | Mythril, Securify, Oyente
Gas Limit & Loops | ~65% | Medium | Slither, Ethlint, Solhint
Delegatecall to Untrusted Contracts | ~95% | Critical | Slither, Mythril

Interpreting and Prioritizing Findings

A systematic process for analyzing and ranking security tool outputs.

1

Classify Findings by Severity and Type

Categorize each issue based on its potential impact and root cause.

Detailed Instructions

First, map each finding to a standard severity level: Critical, High, Medium, Low, or Informational. A Critical finding indicates a direct loss of funds or contract control, like a reentrancy vulnerability. A High finding represents a significant flaw that could lead to loss under specific conditions, such as improper access control. Next, identify the vulnerability type (e.g., reentrancy, integer overflow, logic error). This classification is crucial for understanding the attack vector. For example, a finding flagged as "SWC-107: Reentrancy" should be treated with the highest priority. Use the tool's provided classification, but always verify it aligns with industry references like the SWC Registry or the OWASP Smart Contract Top 10.

Tip: Never dismiss an Informational finding outright; it may reveal poor patterns that could lead to higher-severity issues later.

2

Analyze the Code Context and Execution Path

Examine the specific code location and conditions required to trigger the finding.

Detailed Instructions

Navigate to the exact line of code referenced by the tool. Determine if the vulnerable function is internal/private or external/public, as this affects the attack surface. Check the state variables involved and whether they are updated before or after external calls (the Checks-Effects-Interactions pattern). For a potential integer overflow, verify the data types and the range of possible inputs. Ask: What user role or transaction sequence is needed to exploit this? Is the function protected by a modifier like onlyOwner? For example, a finding in a function guarded by nonReentrant may be a false positive. Trace the execution path manually or using a debugger to confirm the tool's analysis.

solidity
// Example: Analyzing a reentrancy warning
function withdraw() public {
    uint amount = balances[msg.sender];
    (bool success, ) = msg.sender.call{value: amount}(""); // External call before state update
    require(success, "Transfer failed");
    balances[msg.sender] = 0; // State update after external call -> TRUE POSITIVE
}

Tip: Look for surrounding require statements or conditionals that might mitigate the risk.

3

Assess Exploitability and Impact

Evaluate the real-world likelihood and consequences of the vulnerability.

Detailed Instructions

Exploitability assesses how easy it is for an attacker to trigger the flaw. Consider: Are the required funds or privileges readily accessible? Is the contract on a mainnet with high-value deposits? A publicly callable function with no access control is highly exploitable. Impact measures the potential damage. Would exploitation drain all funds, corrupt critical data, or merely cause a denial-of-service? Quantify the maximum financial loss by examining the contract's ETH/token balances or the value of affected state variables. For a governance contract, impact could be loss of protocol control. Combine these factors to adjust the initial severity rating. A High-severity flaw in an unused, deprecated function may be downgraded, while a Medium flaw in a core vault function should be escalated.
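
For example, a hypothetical pair of functions with the same impact but very different likelihood, illustrating how access control shifts the exploitability rating:

solidity
// Same underlying action; exploitability differs with the access control applied.
contract VaultExample {
    address public admin = msg.sender;

    // Highly exploitable: publicly callable, attacker chooses the destination.
    function emergencyWithdraw(address payable to) external {
        to.transfer(address(this).balance);
    }

    // Same impact if triggered, but much lower likelihood: restricted to admin.
    function adminWithdraw(address payable to) external {
        require(msg.sender == admin, "not admin");
        to.transfer(address(this).balance);
    }

    receive() external payable {}
}
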

Tip: Use a simple risk matrix: Risk = Likelihood (Low/Med/High) x Impact (Low/Med/High).

4

Verify Findings and Identify False Positives

Confirm the validity of the finding and filter out incorrect alerts.

Detailed Instructions

Static analysis tools can produce false positives. Common causes include: missing context about inheritance, misinterpreting safe library functions (like OpenZeppelin's SafeMath), or analyzing dead code. To verify, write a simple proof-of-concept test in Foundry or Hardhat that attempts to trigger the vulnerability. If the test fails under expected conditions, the finding is likely a false positive. Also, check if the tool has misidentified a variable—for instance, a private variable mistakenly flagged as publicly readable. Compare findings across multiple tools (e.g., Slither and Mythril); consensus increases confidence. Document the reason for dismissing any finding, such as: "Function is internal and only called after a strict validation check."

solidity
// Tool might flag this as an integer overflow, but it's safe due to Solidity 0.8.x
function add(uint256 a, uint256 b) public pure returns (uint256) {
    return a + b; // Safe: Built-in overflow checks in ^0.8.0
}

Tip: Always review the tool's documentation to understand the limitations of its analysis engines.

5

Prioritize and Create a Remediation Plan

Order the validated findings and define actionable fixes.

Detailed Instructions

Create a prioritized list for remediation. Critical and High severity, validated findings must be addressed immediately before any deployment or as an urgent patch. For each finding, specify the remediation action. For a reentrancy bug, the action is to apply the Checks-Effects-Interactions pattern or use a reentrancy guard. For an access control issue, add a modifier like onlyRole. Assign each fix an owner and a timeline. Also, consider the fix complexity; a simple configuration change is prioritized over a complex architectural rewrite. Update the priority if multiple findings are related—fixing a core logic flaw might resolve several downstream issues. Finally, ensure the plan includes re-running the static analysis after fixes to confirm resolution.
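
As a sketch of the remediation for the reentrancy example analyzed earlier, the withdrawal below applies the Checks-Effects-Interactions pattern; a reentrancy guard such as OpenZeppelin's nonReentrant could be layered on top:

solidity
// Remediation sketch: Checks-Effects-Interactions applied to withdraw().
contract FixedVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() public {
        uint256 amount = balances[msg.sender];                  // Checks
        require(amount > 0, "nothing to withdraw");
        balances[msg.sender] = 0;                               // Effects before interaction
        (bool success, ) = msg.sender.call{value: amount}("");  // Interactions last
        require(success, "Transfer failed");
    }
}
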

Tip: Use a tracking system (e.g., GitHub Issues) with labels for severity, status, and assigned developer.


Integrating Analysis into Development

Embedding security checks into the software development lifecycle to proactively identify and remediate vulnerabilities.

01

Pre-commit Hooks

Automated scanning triggered before code is committed to version control.

  • Run lightweight linters and pattern matchers on staged files.
  • Example: Use solhint or slither in a Git pre-commit hook.
  • Prevents obviously vulnerable code from entering the main repository, enforcing baseline quality.
02

CI/CD Pipeline Integration

Continuous analysis as part of the automated build and test process.

  • Execute full static analysis suites on every pull request or merge.
  • Example: A GitHub Actions workflow that runs Mythril and reports findings as check failures.
  • Provides consistent, repeatable security feedback and prevents regressions.
03

IDE Plugins & LSP

Real-time feedback directly within the developer's coding environment.

  • Integrate tools like Slither or Solidity Visual Developer as VS Code extensions.
  • Highlights vulnerable patterns and suggests fixes as you type.
  • Shifts security left by educating developers and catching issues during initial writing.
04

Custom Rule Development

Tailored detection for project-specific invariants and business logic risks.

  • Write custom Slither or Semgrep rules to enforce internal security policies.
  • Example: A rule to detect improper access control in a custom upgradeable proxy pattern.
  • Extends generic tools to protect against domain-specific vulnerabilities.
05

Post-Deployment Monitoring

On-chain verification to ensure deployed contracts match analyzed code.

  • Use bytecode verification tools to confirm the live contract corresponds to the secured source.
  • Example: Running Slither on verified Etherscan source code for a live contract.
  • Validates the integrity of the deployment and provides assurance for users.

Static Analysis Limitations and FAQs

Static analysis primarily examines code without executing it, which creates inherent blind spots. It cannot reason about runtime state, dynamic data, or complex business logic interactions. For instance, it struggles with oracle manipulation or flash loan attack vectors that depend on specific transaction ordering and pool states. It often produces false positives for benign patterns and misses vulnerabilities requiring specific external calls or off-chain data. The analysis is limited to the code it can see, missing risks in inherited or imported libraries if their source is unavailable.