We share how we built a tool to recover storage layouts of solc-compiled contracts without the need for source code.
We discuss our strategy and pipeline, and highlight some important details around execution and type-checking. We also walk through how we use it in production, compare its output to alternatives, and share some performance metrics.
Please note that, due to differences between the bytecode patterns generated by different compilers, our tool only minimally supports contracts compiled by compilers other than solc (e.g., Vyper). We would love the community’s help in extending that: if you think you’re up to the task, please see this issue and feel free to contact our team!
Goals and Pipeline
We had three big goals for this project:
- It had to run without the need for human intervention.
- It had to be fast enough to run on all contracts.
- It had to be simple enough that developers could extend it without needing to learn the entire codebase.
The central question: How do we infer data types from bytecode? We learn about the types of data from how they are used in the running program. Consider the CREATE opcode: the value it returns is always an address. Most opcodes aren’t this straightforward, but the principle still applies.
To apply this principle, we settled on a symbolic execution approach: we execute the bytecode with symbols (unique objects) in place of the values that would exist when the contract is executed on-chain. We gather information on the operations that are performed on these symbols. It is these operations, and combinations thereof, that tell us about the types involved.
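To make this concrete, here is a minimal Rust sketch of the idea; the names and types are ours for illustration, not the analyzer’s actual internals. Each value is a tree whose leaves are opaque symbols and whose nodes record the opcode that produced them, so a later pass can read type evidence straight off the tree:

#[derive(Debug, Clone)]
#[allow(dead_code)]
enum SymValue {
    // An unknown runtime value, e.g. the result of a CALLDATALOAD.
    Symbol(u32),
    // A constant embedded in the bytecode by the compiler.
    Const([u8; 32]),
    // The result of applying an opcode to other symbolic values.
    Op { opcode: &'static str, args: Vec<SymValue> },
}

fn main() {
    // Executing CREATE symbolically: whatever its arguments were,
    // the value it pushes onto the stack is known to be an address.
    let create_result = SymValue::Op {
        opcode: "CREATE",
        args: vec![SymValue::Symbol(0), SymValue::Symbol(1), SymValue::Symbol(2)],
    };
    // A type-checking pass later walks this tree and records "address"
    // as evidence for this value and for anything it flows into.
    println!("{create_result:?}");
}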
We settled on a three-step pipeline:
- Disassembly: We take the contract bytecode and create an executable representation of each opcode that we can feed into the analyzer (a minimal sketch of this step appears after the list).
- Execution: We execute the sequence of opcodes using a special implementation of the Ethereum Virtual Machine (EVM). The execution builds values as trees based on the operations performed. These values are also moved inside the VM by shifting them around on the stack and writing them to and reading them from memory and storage.
- Type Checking: We process the value trees to get type information. First, the checker recognizes important constructs (e.g., dynamic array indices) from patterns of operations and lifts them to clearer representations. It then uses inference rules to discover type information. Finally, it combines this type information to deliver the type that makes the most sense for that value.
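As a taste of the first step, here is a hedged Rust sketch of a disassembler over raw bytecode, assuming a simplified Instruction type of our own; the real analyzer’s representation is richer. The one subtlety is that PUSH1 through PUSH32 carry immediate bytes that must be skipped rather than decoded as opcodes:

#[derive(Debug)]
#[allow(dead_code)]
struct Instruction {
    offset: usize,
    opcode: u8,
    // Immediate bytes for PUSH1..PUSH32; empty for every other opcode.
    immediate: Vec<u8>,
}

fn disassemble(bytecode: &[u8]) -> Vec<Instruction> {
    let mut out = Vec::new();
    let mut pc = 0;
    while pc < bytecode.len() {
        let opcode = bytecode[pc];
        // PUSH1 (0x60) through PUSH32 (0x7f) carry 1..=32 immediate bytes.
        let n = if (0x60..=0x7f).contains(&opcode) {
            (opcode - 0x5f) as usize
        } else {
            0
        };
        let end = (pc + 1 + n).min(bytecode.len());
        out.push(Instruction {
            offset: pc,
            opcode,
            immediate: bytecode[pc + 1..end].to_vec(),
        });
        pc = end;
    }
    out
}

fn main() {
    // PUSH1 0x80 PUSH1 0x40 MSTORE: the classic free-memory-pointer prologue.
    for ins in disassemble(&[0x60, 0x80, 0x60, 0x40, 0x52]) {
        println!("{ins:?}");
    }
}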
The disassembly is pretty straightforward, so let’s go into more detail on execution and type-checking.
Execution
While the word-based EVM stack is simple to implement, challenges in building a “symbolic EVM” lie in the details:
- Symbolic Memory and Storage: Because our memory locations and storage keys are symbolic, the analyzer has to be very careful to make sure that adjacent reads and writes are handled properly wherever possible.
- Execution Completeness: In an ideal world, the analyzer would execute every possible path through the bytecode. Unfortunately, some contracts (e.g., Shardwallet, which originally appeared to hang indefinitely) contain too many branches for this to be practical. Instead, we leverage the fact that correct Solidity code conveys the same type information in multiple places. This allows us to exclude potential paths through the code without losing too much information. There are a few cases where this path cutting resulted in us missing some fairly important code paths (CpuFrilessVerifier is an example where we miss some storage slots entirely), but this can be improved in the future!
- Controlling Memory Usage: Even with limits on the branches, it is possible for code to use too much system memory. To avoid this, we have to keep each operation’s values sufficiently small. Bounding the value tree sometimes means that we throw away information, but it’s a worthwhile sacrifice to ensure analysis actually completes! We ran into this problem with Indelible, which would create large enough memory writes that we would sometimes crash.
- Constant Folding: Although we don’t operate on actual runtime values, Solidity contracts incorporate constant information into the bytecode. Evaluating operations like addition over these constants improves the accuracy of our symbolic EVM’s memory and storage usage, allowing us to retain more value trees and acquire additional type information. A minimal sketch of this folding follows the list.
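Here is a minimal sketch of that last point, folding constants inside a symbolic value tree. The 128-bit constants and the single ADD rule are simplifications of ours; the EVM really uses 256-bit wrapping arithmetic, and many more opcodes are foldable:

#[derive(Debug, Clone)]
#[allow(dead_code)]
enum SymValue {
    Symbol(u32),
    // Narrowed to 128 bits to keep the sketch short; the EVM word is 256 bits.
    Const(u128),
    Op { opcode: &'static str, args: Vec<SymValue> },
}

fn fold(v: SymValue) -> SymValue {
    if let SymValue::Op { opcode: "ADD", args } = v {
        // Fold the children first, then this node if both sides are constant.
        let folded: Vec<SymValue> = args.into_iter().map(fold).collect();
        if let [SymValue::Const(a), SymValue::Const(b)] = folded.as_slice() {
            // EVM arithmetic wraps on overflow.
            return SymValue::Const(a.wrapping_add(*b));
        }
        SymValue::Op { opcode: "ADD", args: folded }
    } else {
        v
    }
}

fn main() {
    let expr = SymValue::Op {
        opcode: "ADD",
        args: vec![SymValue::Const(0x40), SymValue::Const(0x20)],
    };
    // Prints Const(96): a concrete memory offset instead of an opaque tree,
    // which keeps symbolic memory and storage accesses precise.
    println!("{:?}", fold(expr));
}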
Type Checking
Implementing type checking has its own challenges:
- Designing for Extensibility: To make it easy to update the type-detection patterns, we made the inference rules for type checking modular and self-contained. This allows simple modification of existing rules and the addition of new ones, with a general method combining the evidence these rules produce (see the sketch after this list).
- Sub-Word Computation: To minimize on-chain storage costs, the Solidity compiler packs multiple values into one storage slot where possible. Our type checker accounts for this with built-in support for type sizing and packing.
- Traditional Unification: Inference generates a set of potential types for a value. We use a standard type-checking method to merge these types, either producing a single type or triggering an error. This approach ensures high performance and consistent results.
- Retaining Information: While the analyzer is the system that actually determines type information and eventual types, we wanted to make sure that downstream components get rich type information. To this end, we do not implement any “fall-back” rules in the analyzer, and instead provide all the information we have at the boundary.
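The sketch below shows how modular rules and a simple unifier can fit together; the rule names and the Evidence type are invented for illustration and are not the analyzer’s actual API:

#[derive(Debug, Clone, PartialEq)]
enum Evidence {
    Address,
    UnsignedWord,
    // No information beyond "some 256-bit value".
    Word,
}

// Each rule is self-contained: it inspects one producing opcode and may emit evidence.
trait InferenceRule {
    fn infer(&self, opcode: &str) -> Option<Evidence>;
}

struct CreateReturnsAddress;
impl InferenceRule for CreateReturnsAddress {
    fn infer(&self, opcode: &str) -> Option<Evidence> {
        (opcode == "CREATE").then_some(Evidence::Address)
    }
}

struct ArithmeticIsUnsigned;
impl InferenceRule for ArithmeticIsUnsigned {
    fn infer(&self, opcode: &str) -> Option<Evidence> {
        matches!(opcode, "ADD" | "MUL" | "DIV").then_some(Evidence::UnsignedWord)
    }
}

// Unification: merge all gathered evidence, preferring the most specific type.
fn unify(evidence: Vec<Evidence>) -> Evidence {
    evidence
        .into_iter()
        .reduce(|a, b| match (a, b) {
            (Evidence::Word, other) | (other, Evidence::Word) => other,
            (a, b) if a == b => a,
            // Conflicting specific evidence is a type error in the real
            // checker; this sketch just falls back to the plain word.
            _ => Evidence::Word,
        })
        .unwrap_or(Evidence::Word)
}

fn main() {
    let rules: Vec<Box<dyn InferenceRule>> = vec![
        Box::new(CreateReturnsAddress),
        Box::new(ArithmeticIsUnsigned),
    ];
    let evidence: Vec<Evidence> = rules.iter().filter_map(|r| r.infer("CREATE")).collect();
    println!("{:?}", unify(evidence)); // prints: Address
}

Adding a new rule is then just a new struct and one more entry in the list, which is what keeps the checker extensible without touching the rest of the pipeline.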
For a more in-depth overview of the structure of the analyzer, please see the code tour in the main repository.
Producing Storage Layouts in Production
In this section we detail how we use this powerful tool in our production pipeline to make data from unverified contracts available on evm.storage.
First, we wrap the Storage Layout Analyzer with a CLI component that processes the output of the analyzer and makes a solc-compatible storage layout from it:
{
  "storage": [ /* all storage slot entries */ ],
  "types": { /* all type definitions */ }
}
This allows us to use the layouts seamlessly in production as if they were coming from verified contracts.
A great way to showcase the analyzer’s power is to run it on a verified contract and compare its output with the source-code-based solc layout. Let’s do this for Balancer’s Vault contract; you can see its solc layout on evm.storage:
Running our pipeline, we’re hoping to see 12 storage slots with 13 variables (0x03 is packed with two):
{
"storage":
[
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000000-D",
"offset": 0,
"slot": "0",
"type": "t_bytes32"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000001",
"offset": 0,
"slot": "1",
"type": "t_mapping(t_struct_t_bytes10_t_uint16_t_bytes20,t_struct_t_bytes32_t_mapping(t_bytes32,t_struct_t_bytes20_t_bytes12_t_bytes32)_t_mapping(t_bytes20,t_bytes32))"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000002",
"offset": 0,
"slot": "2",
"type": "t_mapping(t_bytes20,t_bytes32)"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000003-D",
"offset": 0,
"slot": "3",
"type": "t_bytes1"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000003-0x1",
"offset": 1,
"slot": "3",
"type": "t_address"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000004",
"offset": 0,
"slot": "4",
"type": "t_mapping(t_bytes20,t_mapping(t_bytes20,t_struct_t_bytes1_t_bytes31))"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000005",
"offset": 0,
"slot": "5",
"type": "t_mapping(t_struct_t_bytes10_t_uint16_t_bytes20,t_struct_t_bytes1_t_bytes31)"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000006",
"offset": 0,
"slot": "6",
"type": "t_bytes10"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000007",
"offset": 0,
"slot": "7",
"type": "t_mapping(t_struct_t_bytes10_t_uint16_t_bytes20,t_mapping(t_bytes20,t_struct_t_bytes14_t_bytes14_t_bytes4))"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000008",
"offset": 0,
"slot": "8",
"type": "t_mapping(t_struct_t_bytes10_t_uint16_t_bytes20,t_struct_t_array(t_bytes20)dyn_storage_t_mapping(t_bytes20,t_bytes32))"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x00000000000000000000000000000000000000000000000000000000000009",
"offset": 0,
"slot": "9",
"type": "t_mapping(t_struct_t_bytes10_t_uint16_t_bytes20,t_struct_t_bytes32_t_bytes32_t_mapping(t_bytes32,t_struct_t_uint112_t_bytes18_t_bytes14_t_bytes18))"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x0000000000000000000000000000000000000000000000000000000000000a",
"offset": 0,
"slot": "10",
"type": "t_mapping(t_struct_t_bytes10_t_uint16_t_bytes20,t_mapping(t_bytes20,t_bytes20))"
},
{
"astId": -1,
"contract": "",
"label": "sla-0x0000000000000000000000000000000000000000000000000000000000000b",
"offset": 0,
"slot": "11",
"type": "t_mapping(t_bytes20,t_mapping(t_bytes32,t_bytes32))"
}
]
}
The last storage slot detected by the analyzer is 11 (0x0B), meaning we indeed detected 12 storage slots (starting from 0). We also see that slot 3 has two entries, one for each packed variable.
Parity with solc is great, and exactly what we would expect. More surprising is that our layouts are sometimes much truer to the underlying internal usage of storage slots! Consider _minimalSwapInfoPoolsBalances:
Seemingly a simple doubly-nested mapping, it’s actually more complex than its “official” Solidity definition. Looking at our analyzer’s output, we see the truth behind what solc sees as two bytes32s:
// storage slot 0x7
t_mapping(
    t_struct_t_bytes10_t_uint16_t_bytes20,
    t_mapping(
        t_bytes20,
        t_struct_t_bytes14_t_bytes14_t_bytes4
    )
)
The key in the first mapping of this variable is a poolId. Here’s how it’s structured in the code:
/**
* @dev Creates a Pool ID.
*
* These are deterministically created by packing the Pool's contract address and its specialization setting into
* the ID. This saves gas by making this data easily retrievable from a Pool ID with no storage accesses.
*
* Since a single contract can register multiple Pools, a unique nonce must be provided to ensure Pool IDs are
* unique.
*
* Pool IDs have the following layout:
* | 20 bytes pool contract address | 2 bytes specialization setting | 10 bytes nonce |
* MSB LSB
*
* 2 bytes for the specialization setting is a bit overkill: there only three of them, which means two bits would
* suffice. However, there's nothing else of interest to store in this extra space.
*/
function _toPoolId(
address pool,
PoolSpecialization specialization,
uint80 nonce
) internal pure returns (bytes32) {
bytes32 serialized;
serialized |= bytes32(uint256(nonce));
serialized |= bytes32(uint256(specialization)) << (10 * 8);
serialized |= bytes32(uint256(pool)) << (12 * 8);
return serialized;
}
Our analyzer identified t_struct_t_bytes10_t_uint16_t_bytes20, which corresponds perfectly to what the devs identified as:
20 bytes pool contract address | 2 bytes specialization setting | 10 bytes nonce.
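To make the correspondence concrete, here is a hedged Rust sketch of ours (not Balancer’s code) that inverts _toPoolId, slicing the 32-byte poolId into the three packed fields the analyzer recovered:

fn decode_pool_id(pool_id: [u8; 32]) -> ([u8; 20], u16, [u8; 10]) {
    // 20-byte pool contract address in the most-significant bytes.
    let mut pool = [0u8; 20];
    pool.copy_from_slice(&pool_id[0..20]);
    // 2-byte specialization setting in the middle.
    let specialization = u16::from_be_bytes([pool_id[20], pool_id[21]]);
    // 10-byte nonce in the least-significant bytes.
    let mut nonce = [0u8; 10];
    nonce.copy_from_slice(&pool_id[22..32]);
    (pool, specialization, nonce)
}

fn main() {
    let mut id = [0u8; 32];
    id[19] = 0x42; // last byte of the pool address
    id[21] = 2;    // specialization = 2
    id[31] = 7;    // nonce = 7
    let (pool, spec, nonce) = decode_pool_id(id);
    assert_eq!((pool[19], spec, nonce[9]), (0x42, 2, 7));
    println!("specialization {spec}, nonce ends in {}", nonce[9]);
}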
Let’s find out about that other bytes32, which is derived from a library called BalanceAllocation:
// One of the goals of this library is to store the entire token balance in a single storage slot, which is why we use
// 112 bit unsigned integers for 'cash' and 'managed'. For consistency, we also disallow any combination of 'cash' and
// 'managed' that yields a 'total' that doesn't fit in 112 bits.
//
// The remaining 32 bits of the slot are used to store the most recent block when the total balance changed. This
// can be used to implement price oracles that are resilient to 'sandwich' attacks.
Our analyzer identified the value in the second mapping as t_struct_t_bytes14_t_bytes14_t_bytes4, which is exactly the 224 bits (112 bits + 112 bits) reserved for the cash and managed parts in Balancer’s dev notes, with the remaining 32 bits carrying the most recent block at which the balance was changed.
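Here is a hedged sketch of how those bits unpack; the layout below (cash in the least-significant 112 bits, managed in the next 112, and the last-change block in the top 32) is our reading of the dev notes and of the analyzer’s bytes14/bytes14/bytes4 ordering, not Balancer’s code:

// Interpret a big-endian 32-byte slot as (cash, managed, last_change_block).
fn be_u128(bytes: &[u8]) -> u128 {
    bytes.iter().fold(0u128, |acc, &b| (acc << 8) | u128::from(b))
}

fn decode_balance(slot: [u8; 32]) -> (u128, u128, u32) {
    let last_change_block = be_u128(&slot[0..4]) as u32; // most-significant 32 bits
    let managed = be_u128(&slot[4..18]); // middle 112 bits
    let cash = be_u128(&slot[18..32]); // least-significant 112 bits
    (cash, managed, last_change_block)
}

fn main() {
    let mut slot = [0u8; 32];
    slot[31] = 100; // cash = 100
    slot[17] = 7;   // managed = 7
    slot[3] = 1;    // last balance change at block 1
    assert_eq!(decode_balance(slot), (100, 7, 1));
}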
We also wanted to compare against public decompilers. Contract Library by Dedaub is one of our favorites, known for its accuracy. Here’s its output for the same contract:
// Decompiled by library.dedaub.com
// 2023.09.24 00:29 UTC
// Compiled using the solidity compiler version 0.7.1
// Data structures and variables inferred from the use of storage instructions
uint256 _setAuthorizer; // STORAGE[0x0]
mapping (uint256 => struct_2628) map_1; // STORAGE[0x1]
mapping (uint256 => uint256) _getNextNonce; // STORAGE[0x2]
uint256 stor_3_0_0; // STORAGE[0x3] bytes 0 to 0
uint256 _getAuthorizer; // STORAGE[0x3] bytes 1 to 20
mapping (uint256 => mapping (uint256 => uint256)) _hasApprovedRelayer; // STORAGE[0x4]
mapping (uint256 => uint256) _getPool; // STORAGE[0x5]
uint256 _registerPool; // STORAGE[0x6]
mapping (uint256 => mapping (uint256 => uint256)) map_7; // STORAGE[0x7]
mapping (uint256 => struct_2623) map_8; // STORAGE[0x8]
mapping (uint256 => struct_2624) map_9; // STORAGE[0x9]
mapping (uint256 => mapping (uint256 => uint256)) _getPoolTokenInfo; // STORAGE[0xa]
mapping (uint256 => mapping (uint256 => uint256)) _getInternalBalance; // STORAGE[0xb]
struct struct_2644 { uint256 field0; uint256 field1; };
struct struct_2646 { uint256 field0; uint256 field1; };
struct struct_2624 { uint256 field0; uint256 field1; mapping (uint256 => struct_2644) field2; };
struct struct_2628 { uint256 field0; mapping (uint256 => struct_2646) field1; mapping (uint256 => uint256) field2; };
They get the packed storage slot and the general structure of the storage layout just right, but their identification of the mapping at 0x07 as mapping (uint256 => mapping (uint256 => uint256)) isn’t quite as rich as our coverage, as we discussed above.
We also checked heimdall-rs, another popular decompiler:
contract DecompiledContract {
bytes32 public stor_c;
bytes32 public stor_d;
mapping(bytes => bytes) public stor_map_b;
mapping(bytes => bytes) public stor_map_e;
mapping(address => address) public stor_map_a;
mapping(address => address) public stor_map_f;
mapping(address => bytes32) public stor_map_g;
//...
}
No doubly-nested mappings were discovered, and the inferred types don’t make much sense.
The example given above is not a benchmark but a complex contract hand-picked from a handful of contracts that we’ve sampled. Given the nature of solc and the bytecode patterns it generates, we are confident that the samples we’ve examined are representative of the tool’s accuracy. Still, we are eager to conduct a more comprehensive benchmark and put that to the test (or have someone from the community run it).
Performance
Performance is very important to us. We aim to provide insights into large chains such as Ethereum, BSC, etc., with millions of contracts deployed, so we need our analysis tools to be blazingly fast 🦀. To this end, we evaluated runtimes on contracts from Zellic’s smart-contract-fiesta, a dataset of deduplicated smart contract code deployed to Ethereum Mainnet. Here’s a histogram of the results:
We find that ~20% of contracts can be analyzed in under 20ms, and a large majority in less than 200ms. This includes negligible overhead from communicating with a gRPC server.
Thanks for reading this post! Please enjoy evm.storage and check out the repo. Join our TG to give us feedback. Lastly, we want to thank Reilabs again for their work on this tool, especially Ara, who also contributed to this post.