Introduction

Zipline is a permissionless block header oracle from Gasper chains (e.g. Ethereum/Gnosis beacon chains) to EVM chains. It uses fault proofs to ensure that any relayed block roots that have not been finalized by the origin chain will be rejected.

Setup and Goals

Our goal in creating Zipline was to define a protocol that can produce a regularly updated, trusted source of finalized blocks for the origin chain within the execution of the destination chain.

The sub-goals for this are:

  1. To do so as cheaply as possible with respect to destination chain storage and execution
  2. To keep protocol participation permissionless with no specialized roles
  3. To minimize the time between blocks finalizing on the origin chain and being accepted as trusted on the destination
  4. To avoid the use of any additional cryptography (e.g. ZKPs) not used by the underlying protocol

Construction

The design of Zipline closely follows that of Optimism Cannon and shares several components. It comprises:

  • An off-chain emulator and on-chain contract capable of executing MIPS micro-architecture instructions
  • A verifier program which can run in these environments and verify Casper chain finality
  • A contract that manages pending block headers and open challenges
  • A pre-processor that reads from a Gasper chain and produces inputs to the verifier

It also relies on two types of actors - relayers and watchers.

Relayers are responsible for watching the origin chain and submitting finalized epoch boundary blocks (EBBs) to the Zipline contract on the destination where they enter pending status.

Watchers are responsible for watching both the origin chain and the destination chain and detecting when an EBB submitted to the Zipline contract does not match the EBB finalized by the origin chain for the given epoch. If they observe this, they must construct an input payload for the verify function, run the verify function in their emulator to obtain the final state of the trace, and submit both of these to the Zipline contract via the challenge method, which starts a challenge game between themselves and the fraudulent relayer.

Both of these roles are permissionless and the protocol guarantees that an honest watcher will always win against a dishonest relayer and vice versa.

[Figure: Protocol architecture]

Contributions

The primary innovation of Zipline is the design of a stateless finality client for Casper. This allows clients with minimal storage capabilities to follow the finalized chain by receiving only a single update message per epoch. Provable execution of this stateless client is what allows the destination chain to prove block finality.

There were also two additional findings that make implementing the protocol feasible for very large validator sets such as those on the Ethereum and Gnosis mainnets:

1. Strategies to reduce calldata requirements

In a typical challenge game (such as that used for an optimistic rollup bridge) a participant submits an assertion, for example the value of a new state root after some transactions have been applied. To verify this statement observers must have access to all inputs (e.g. transaction data) and to the verification program itself. To ensure these inputs are available they are typically included in calldata on the host chain.

For a Casper header oracle this is problematic because verifying finality requires a lot of data (attestations from every validator) and must be done quite frequently (once per epoch).

We solved this problem by noting two properties of a Casper finalized chain under the assumptions that the chain is live and a (1/3)-slashing event has not occurred:

  1. The state for any recently finalized checkpoint can be reconstructed from public data
  2. There is only a single correct finalized successor checkpoint to any finalized checkpoint

The first point allows the protocol to assume that the state for any recently finalized checkpoint is made available by the origin chain itself. Therefore the trusted checkpoint state (which is one of the main data structures required for verification) does not need to be included in calldata. This is fortunate as for any chain with a large number of validators the size of this structure is over 1GB.

The second point allows us to invert the challenge game. Rather than proving the invalidity of the execution of a finality check we can prove the validity of another update at the same height. Irrespective of the accompanying data (e.g. attestations) there is only a single valid candidate successor checkpoint to a given trusted checkpoint. As a result, the accompanying data need not be submitted with each update; it only needs to be included by a challenger who decides to open a challenge.

Both of these insights reduce the happy path operating costs to only that of receiving and storing a single checkpoint plus some additional metadata. This is far less than the original Zipline light-client design and less than a validity-proof based design.

2. Combining SSZ Merklization with a pre-image oracle for efficient structured data retrieval

Cannon, the provable execution environment used by Zipline, uses a method called the pre-image oracle for retrieval of arbitrary data by its hash. When the program writes a hash value into a special slot in memory, the pre-image of this hash will appear in another memory range. The emulator itself (both the off-chain and on-chain variants) will effectively pause execution until the host can provide the correct pre-image. From the perspective of the code running in the emulator it has an oracle that will always correctly invert hashes.

Doing so has an associated cost when running on-chain. If the instruction executed by the on-chain arbitrator in the final stage of the challenge game reads from the pre-image oracle memory range it will require a challenge game participant to submit the correct pre-image on-chain before it can conclude. The pre-image must be submitted and hashed within the execution of the destination chain.

Our insight was that it is possible to efficiently read chunks from SSZ Merklized data structures using this pre-image oracle. Given the root hash and an index to the chunk of interest the intermediate nodes in the tree can be read one at a time from the root down. Each intermediate node contains two hashes and the index can be used to determine which side to traverse.

Loading data this way is both memory efficient and ensures that the largest pre-image a challenge game participant would ever need to submit on-chain is 2 hashes (64 bytes). This strategy is used to read chunks of data from the very large beacon state object.
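To make the traversal concrete, below is a minimal sketch of reading a single chunk this way. It assumes an abstract preimage(hash) callable standing in for the Cannon oracle and addresses chunks by SSZ generalized index; the name read_chunk is illustrative rather than the verifier's actual API.

def read_chunk(preimage, root: bytes, gindex: int) -> bytes:
    # Walk from `root` down to the leaf addressed by generalized index
    # `gindex`. Each oracle request returns a 64-byte intermediate node
    # (left_hash || right_hash); the bits of the generalized index below
    # the leading 1, read from most significant to least significant,
    # select the branch to follow at each level (0 = left, 1 = right).
    depth = gindex.bit_length() - 1
    node = root
    for level in range(depth - 1, -1, -1):
        pair = preimage(node)             # 64 bytes, the largest pre-image ever needed
        assert len(pair) == 64
        node = pair[32:] if (gindex >> level) & 1 else pair[:32]
    return node                           # the requested 32-byte chunk

The generalized index follows the usual SSZ convention: the root has index 1 and the children of node i are 2i and 2i + 1.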

Background

Casper Finality Gadget

Casper is the finality protocol used as part of the beacon chain protocol. It provides an economic guarantee that once a checkpoint has been finalized it cannot be removed from the canonical chain unless at least 1/3 of all stake is slashed. This 1/3 of stake is the economic security of finality.

Casper operates on checkpoints, which are pairs of epoch boundary blocks and epoch numbers. Finalizing a checkpoint serves to also finalize all ancestors of the checkpoint. Validators in Casper vote by signing messages which contain two checkpoints - a source and a target. By signing a source-target pair a validator attests that both are part of the canonical chain to the best of their knowledge.

When 2/3 of validators have attested to the same source-target pair it is known as a supermajority link. A supermajority link with a justified source results in the target also being justified. A supermajority link with a justified source, a justified target, and all intermediate checkpoints also being justified, results in the source being finalized.

A justified checkpoint cannot be un-justified without a slashing event, however it is possible for there to be multiple justified checkpoints at the same height. This is unlike finalized checkpoints, for which there cannot be a conflicting checkpoint at the same height under the economic security assumptions. This property is important in the design of the Zipline protocol.

For further reading see the Gasper paper.

Optimism Cannon

Cannon is a fault proving framework developed as part of the Optimism stack. It provides a way to write and compile programs such that the result of their execution can be proven on-chain using interactive fault proofs.

Off-chain, the programs are executed in an emulator, which has access to all registers and memory of the CPU at any point during program execution. The sequence of these memory snapshots for the execution of a program is called a trace. For any initial snapshot there is exactly one correct trace.

Pre-image Oracle

Cannon has a feature, called the pre-image oracle, that makes it possible to read certain pieces of data from the outside world during execution. The program can write a hash into a special location in memory and expect the pre-image of that hash to appear in another memory range. The program will only continue execution if the host correctly inserts this pre-image into memory.

Obviously this is impossible in the general case as hash functions should be irreversible. It can only be guaranteed to work for data that is known to be available in both hashed and un-hashed form. A good example of this would be blocks in the host chain. Both the block hashes and block data are available in the chain history and can reasonably be assumed to be available.
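As a rough illustration of the host side of this handshake, the sketch below services an oracle request by copying the known pre-image into the guest's memory. The addresses and the length-prefix convention are placeholders, not Cannon's actual memory map.

ORACLE_HASH_ADDR = 0x3000_0000   # placeholder: where the guest writes the requested hash
ORACLE_DATA_ADDR = 0x3100_0000   # placeholder: where the pre-image is made to appear

class PreimageOracleHost:
    def __init__(self, db: dict[bytes, bytes]):
        self.db = db                 # hash -> pre-image, collected off-chain in advance

    def service(self, memory: bytearray) -> None:
        # Read the 32-byte hash the guest wrote, look up its pre-image and
        # map it (length-prefixed, for illustration) into the output range.
        requested = bytes(memory[ORACLE_HASH_ADDR:ORACLE_HASH_ADDR + 32])
        data = self.db[requested]    # execution cannot continue until this is known
        memory[ORACLE_DATA_ADDR:ORACLE_DATA_ADDR + 8] = len(data).to_bytes(8, "big")
        memory[ORACLE_DATA_ADDR + 8:ORACLE_DATA_ADDR + 8 + len(data)] = data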

During the final stage of fault-proof execution if the executed instruction is reading from the pre-image oracle output memory range then the contract must be provided with the pre-image data in order to hash it and check its correctness.

Protocol Overview

The Zipline protocol produces a trusted source of finalized checkpoints (block root + epoch) for the origin chain within the execution of the destination chain. Checkpoints can be trusted under the assumption that any incorrect checkpoints can be proven incorrect and that observers are incentivised to do so in a timely manner.

Checkpoints are proven incorrect by proving the correctness of another finalized checkpoint at the same height using the finality client since by design the Casper algorithm will only allow a single chain of checkpoints to finalize. The finality client is too expensive to execute on-chain so instead the result of its execution is proven using fault proofs.

The protocol operates in two cases which we call the happy case and the challenge case.

Happy Case

In the happy case the protocol is exceedingly simple and requires almost no execution or calldata and minimal storage on the destination chain. This makes it very cheap during regular operation which, provided the correct incentives are in place, should be all of the time. The protocol progresses as follows:

  1. A bonded relayer posts a finalized checkpoint (the candidate) to the Zipline contract on the destination chain by submitting a transaction
  2. This is stored along with the address of the relayer and the timestamp it was received

After a sufficiently long time has elapsed without challenge the relayer may unbond and the checkpoint is considered trusted. Consumers of the oracle can elect to trust the checkpoint sooner depending on their security threshold.
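The bookkeeping for this path is small enough to sketch in a few lines. The structure below is illustrative only (the on-chain version is the Solidity IZipline interface shown later); the challenge window constant is a placeholder.

from dataclasses import dataclass

CHALLENGE_WINDOW = 7 * 24 * 3600          # placeholder challenge period, in seconds

@dataclass
class Submission:
    block_root: bytes
    relayer: str
    received_at: int

pending: dict[int, Submission] = {}       # epoch -> pending candidate
trusted: dict[int, bytes] = {}            # epoch -> accepted finalized block root

def submit(epoch: int, block_root: bytes, relayer: str, now: int) -> None:
    # Steps 1-2: a bonded relayer posts a candidate checkpoint, which is
    # stored together with the relayer address and receipt timestamp.
    pending[epoch] = Submission(block_root, relayer, now)

def finalize(epoch: int, now: int) -> None:
    # After the challenge window has elapsed without challenge the
    # candidate is promoted to trusted and the relayer may unbond.
    sub = pending[epoch]
    assert now - sub.received_at >= CHALLENGE_WINDOW, "still inside the challenge window"
    trusted[epoch] = sub.block_root
    del pending[epoch]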

[Figure: Protocol architecture]

See the analysis chapter for the storage and gas usage for the happy case on an EVM destination.

Challenge Case

As previously mentioned the challenger does not prove the invalidity of the most recent update but rather the validity of an alternative finalized checkpoint at the same height.

A challenge is made by another bonded relayer submitting the following in a transaction on the destination:

  • The ID of the update they are challenging
  • An SSZ-serialized ZiplineInput container
  • The terminal memory snapshot and CPU cycle count obtained by executing the verify program with the given ZiplineInput. Note that this final snapshot must indicate a successful termination and acceptance of the candidate checkpoint

This initiates a challenge between the original checkpoint relayer (now defender) and the challenger. The defender must first respond with their own, different terminal snapshot and CPU count indicating either an unsuccessful termination or an invalid candidate. After this the challenge game plays out as an interactive bisection game over the execution trace to determine which player (or both) submitted an incorrect trace.

If the game terminates in favour of the defender then the challenger's bond is reassigned to the defender and the submission is allowed to remain. If it terminates in favour of the challenger then the defender's bond is reassigned to the challenger and the submission is removed.

The below sequence diagram illustrates a fraudulent relayer being successfully challenged and their invalid checkpoint being rolled back.

[Figure: challenge sequence diagram]

Casper Stateless Finality Client

We define the idea of a finality client for the beacon chain. This differs from a light-client in that it follows the Casper finality protocol in full and inherits all of its security properties. It also differs from a full node in that it does not store the full beacon state, follow the chain tips or participate in p2p communication with other nodes.

It is recommended to first study the Gasper paper which defines the variant of Casper used by the Ethereum beacon chain. In particular sections 4.1-4.5. Notation from this paper will be used in this section.

Objectives

  • It should only be required to store the absolute minimum data and this should be constant size
  • It should only need to receive an update once per epoch
  • This update should be as small as possible
  • The protocol should inherit the same security as a full node

Preliminaries

A checkpoint in Casper is a 2-tuple consisting of a block hash and an epoch number. A beacon chain block contains a state root. A beacon state contains all of the validator keys and other data required to calculate a shuffling which determines which validators should attest in each slot.

Validators attest to source-target pairs called links where the source and target are checkpoints. They attest by signing the link plus some additional data (slot number, committee number) with their BLS private key. If a link has received attestations from $2/3$ of the validator set it is said to be a supermajority link. A checkpoint is said to be justified if it is the target of a supermajority link with a justified source. A checkpoint is said to be finalized if all of the following hold:

  • it is justified
  • it is the source of a supermajority link with a justified target
  • all intermediate checkpoints between the source and target are also justified.
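A minimal sketch of these rules follows, treating checkpoints as epoch numbers and votes as stake fractions keyed by (source, target) link; names are illustrative and slashing/weighting details are omitted.

def is_supermajority(votes: dict, source: int, target: int) -> bool:
    return votes.get((source, target), 0.0) >= 2 / 3

def between(source: int, target: int) -> range:
    # Checkpoints strictly between source and target, by epoch number.
    return range(source + 1, target)

def justify(votes: dict, justified: set) -> set:
    # A checkpoint becomes justified when it is the target of a
    # supermajority link whose source is already justified. The trusted
    # (or genesis) checkpoint seeds the justified set.
    changed = True
    while changed:
        changed = False
        for (source, target) in votes:
            if source in justified and target not in justified and is_supermajority(votes, source, target):
                justified.add(target)
                changed = True
    return justified

def is_finalized(votes: dict, checkpoint: int, justified: set) -> bool:
    # Finalized when justified and the source of a supermajority link to a
    # justified target, with every intermediate checkpoint also justified.
    return checkpoint in justified and any(
        is_supermajority(votes, checkpoint, target)
        and all(c in justified for c in between(checkpoint, target))
        for target in justified
    )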

Design

The design was arrived at by starting with the regular Gasper protocol and removing/compressing parts until the objectives were met.

An iteration starts with an initial trusted checkpoint, $C_t$ which is known to be finalized. Given some candidate checkpoint, $C$, and some additional proof data, the aim of the protocol is to determine if the conditions have been met for $C$ to be finalized. Doing so will require obtaining and verifying enough attestations to form supermajority links which justify $C$ and at least the next direct successor $C'$, as well as the supermajority link to finalize $C$.

Verifying attestations requires two things - the public keys of all active validators, and the shuffling which assigns each validator to a slot/committee. All data required to verify the attestations for an epoch is contained within the beacon state as of the first block of the epoch. Attestations from multiple epochs will need to be aggregated in order to finalize the candidate checkpoint. This will require computing the shufflings and aggregating keys from multiple beacon states.

A full client stores the beacon state locally and derives future states by applying every block received during an epoch. Since the finality client is stateless it will need to receive the relevant pieces of the state each iteration.

Inputs

Inputs to the verifier can be split into two types - trusted inputs and free inputs.

Trusted

Trusted inputs are those which are committed to by the trusted checkpoint (which we can assume is part of a finalized chain). This includes:

  • Beacon state as of the trusted checkpoint. This itself commits to:
    • validator keys, activation status, balances, slash status, etc
    • RANDAO reveals

The trusted values can be used as though they are already known to be part of the finalized chain. The state in particular is useful as it allows access to the full set of validators which can be used to verify attestations in the following epoch.

Free

There are also the free inputs. These include:

  • The candidate checkpoint
  • Attestations
  • State patches

Free inputs can be manipulated by an attacker attempting to convince the verifier that the checkpoint has finalized when it has not. In the section on attacks we argue that an attacker can at best degrade the security threshold by some amount $D$ which is very small compared with the size of the validator set.

State Patches

Recall that the protocol must also be able to compute the validator shuffling for subsequent epochs, not just the one following the trusted checkpoint. The pieces of state data that change between epochs and impact the validator shuffling are:

  • epoch number
  • validator activations
  • validator exits
  • number of new validators
  • RANDAO reveal

We term a data structure that captures the changes in these fields between two adjacent epochs a state patch. State patches are part of the free inputs to the verify function.
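A minimal sketch of applying such a patch to the shuffling-relevant slice of the state follows; field and helper names are illustrative (the SSZ container itself is defined in the Containers section).

from dataclasses import dataclass

@dataclass
class ShufflingState:
    epoch: int
    active_validators: set       # indices of currently active validators
    n_validators: int            # total number of validators ever deposited
    randao: bytes                # mix used to seed the next epoch's shuffling

def apply_patch(state: ShufflingState, patch) -> ShufflingState:
    # A single patch advances the state by exactly one epoch.
    assert patch.epoch == state.epoch + 1
    active = (set(state.active_validators) | set(patch.activations)) - set(patch.exits)
    return ShufflingState(
        epoch=patch.epoch,
        active_validators=active,
        n_validators=state.n_validators + patch.n_deposits_processed,
        randao=patch.randao_next,
    )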

Operation

The basic premise of the finality client is that it starts from a trusted checkpoint and projects ahead a number of epochs in order to verify the signatures on all provided attestations, construct supermajority links, and finalize the given candidate.

A single step of the operation involves retrieving the requisite data from the state in order to compute the validator shuffling and verify the attestations that originate from a single epoch.

[Figure: Single step]

However, a single epoch's worth of attestations is not enough to produce the supermajority links needed to finalize a checkpoint. In the best case, where the chain is finalizing as fast as possible, this requires at least 2 epochs' worth, and that number may be much higher.

The finality client uses state patches to advance the trusted state to future epochs which can then be used to verify attestations originating in those epochs. Note that this patched state will not be identical to the state obtained by a full client following the chain but it will be able to verify valid attestations.

The attestations from all epochs are aggregated and the supermajority links constructed. These supermajority links can then be used to prove the finality of the candidate checkpoint.
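Putting the pieces together, a minimal sketch of the whole stateless check follows. The shuffling, attestation-verification, patching and finality helpers are passed in as parameters because their real implementations live in the verifier; all names here are illustrative.

def verify_finality(trusted_state, trusted_cp, candidate_cp, patches,
                    attestations_by_epoch, *, compute_shuffling,
                    verify_attestation, apply_patch, finalizes) -> bool:
    votes: dict = {}                                  # (source, target) link -> attesting stake
    state = trusted_state
    for epoch_attestations, patch in zip(attestations_by_epoch, patches):
        shuffling = compute_shuffling(state)          # committee assignments for this epoch
        for att in epoch_attestations:
            weight = verify_attestation(att, shuffling, state)   # attesting stake, 0 if invalid
            link = (att.source, att.target)
            votes[link] = votes.get(link, 0.0) + weight
        state = apply_patch(state, patch)             # project the state one epoch ahead
    # The accumulated links must justify the candidate and its successor and
    # finalize the candidate, per the rules in Preliminaries.
    return finalizes(votes, trusted_cp, candidate_cp)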

[Figure: Multi step]

Zipline Verifier

The verifier wraps the finality client in a way that can be executed within Cannon. It also abstracts the reading of state data in a way that is compatible with the Cannon pre-image oracle.

State Reader

The state reader for the Zipline verifier relies on the pre-image oracle and SSZ Merklization to make state access both memory efficient and cheap. Only the required 32-byte chunk of the state is loaded at any one time. The SszStateReader illustrates how this strategy is used to retrieve the required state chunks.

Building for baremetal MIPS target

Unlike Optimism, which uses Golang for its provable execution, Zipline uses Rust. This requires quite a complex build system in order to support the Cannon environment. Custom implementations of syscalls are added to support the special features of the Cannon host environment, such as notifying the host of successful completion or requesting data via the pre-image oracle.

The build system also converts the output of rustc, which is an ELF executable, into a linear MIPS memory array which includes the program as well as the zero-initialized stack and heap. This binary file is loaded directly into the MIPS emulator as its initial memory.

Destination Chain Contracts

The system can be cleanly split into 4 major components with well-defined concerns. This allows for pluggable components (e.g. replacing the MIPS execution with another instruction set like RISC-V) and reusing existing implementations.

classDiagram
class Executor {
    +step(snapshotHash) newSnapshotHash
}

class ExecutorIO {
    +readOutput(snapshotHash) data
    +writeInput(snapshotHash, input) newSnapshotHash
}

class GCOrchestrator {
    +newChallenge(startAndEndSnapshots, challengeParams) challengeId
    +dissectChallenge(challengeId, faultAssertion, newSnapshots)
    +proveChallenge(challengeId, faultAssertion)
    +timeoutChallenge(challengeId)
}

class Zipline {
    +submit(epochNumber, updateHash, blockRoot)
    +challenge(submissionId, finalSnapshotHash, nSteps)
    +finalize(submissionId)
}   

Zipline --|> GCOrchestrator
GCOrchestrator --> Executor
Zipline --> ExecutorIO

Executor

interface IExecutor {
    function step(bytes32 snapshotHash) external returns (bytes32 nextSnapshotHash);
}

The executor is responsible for performing the one-step execution required for fraud proofs. It makes no assumptions about the underlying micro-architecture, only that it can take a commitment to an execution trace snapshot and deterministically transition this to a new snapshot commitment.

How the instructions, registers, memory, etc are read from the snapshot given its commitment is an implementation detail. In the MIPS implementation this requires posting Merkle tree nodes on-chain so they are available to construct proofs into the snapshot. If the data required to construct a proof is not available the step function will revert.

ExecutorIO

interface IExecutorIO {
    function readOutput(bytes32 snapshotHash) external view returns (bytes32 output);
    function writeInput(bytes32 snapshotHash, bytes32 input) external returns (bytes32 nextSnapshotHash);
}

Like the executor, this contract provides functions to find the next snapshot hash after writing to designated locations in memory and also to read other designated locations. As above, this will likely require data to be made available to support proofs into the snapshot. Reading the output should also check that the execution has signalled termination of the program.

GCOrchestrator

interface IGCOrchestrator {
    function newChallenge(
        bytes32[2] memory startAndEndSnapshots,
        uint256 nSteps,
        address defender,
        address challenger
    ) internal returns (uint64);

    function dissectChallenge(uint64 challengeId, FaultAssertion calldata assertion, bytes32[] calldata newSnapshots)
        public;

    function proveChallenge(uint64 challengeId, FaultAssertion calldata assertion)
        public;

    function timeoutChallenge(uint64 challengeId) public;
}

The challenge orchestrator abstracts any bridge related details and reduces a challenge down to an assertion of a start and end snapshot commitment with respect to an executor. This assertion is either true or false and once a challenge has been started it will eventually conclude by either completing the interactive game and executing a one-step proof, or by either party failing to move in time resulting in a timeout.

It holds the state of open challenges including the participants and whose move is next, the time since the last update, and a hash that commits to the challenge state. Using a commitment to the challenge state reduces the storage requirements but requires the challenge game participants to resubmit the prior step challenge state each time.

Unlike the original Cannon challenge game the Zipline game supports splitting the trace into multiple sections (e.g. k-section) per round rather than just bisecting it in half (i.e. \(k=2\)). This is a very favorable trade-off as the number of rounds is proportional to \(\log_k(N)\), where \(N\) is the number of instructions in the trace. The demo currently runs with \(k=15\) but an optimal value should be selected for a production deployment once the average trace length is known.
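For a sense of scale, the round count is roughly \(\lceil\log_k(N)\rceil\). A quick sketch (using the spec-test trace length reported in Protocol Properties below) illustrates the difference:

from math import ceil, log

def rounds(n_steps: int, k: int) -> int:
    # Each round narrows the disputed trace segment by a factor of k.
    # Floating-point log is accurate enough for this illustration.
    return ceil(log(n_steps, k))

# For the ~1.5e10-instruction spec-test trace:
# rounds(15_497_874_166, 2)  == 34   (plain bisection)
# rounds(15_497_874_166, 15) == 9    (15-section, as in the demo)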

Zipline

interface IZipline {
    function trustedCheckpoints(uint64 epoch) external view returns (bytes32);
    function submit(uint64 epoch, bytes32 blockRoot) external;
    function challenge(
        uint64 epoch,
        bytes32 rivalBlockRoot,
        bytes calldata proofData,
        bytes32 finalSnapshot,
        uint256 nSteps
    ) external;
    function finalize(uint64 epoch) external;
}

This is the only component that is specific to the Zipline protocol. The rest can be used to perform fraud-proofs on any kind of execution.

It keeps track of the relayed submissions, the stake assigned to these submissions, which block roots are pending or finalized, and any open challenges.

It is permissionless to submit new candidate blockRoots for consideration but these require stake above a certain threshold in order to finalize. Once staked upon, the stake cannot be removed from a submission until it is either finalized or has been proven fraudulent (in which case the stake is reassigned to the challenger).

It is also permissionless to finalize a submission once the fraud proving window has elapsed.

The Zipline contract stores and makes available all finalized block roots for consumption by other protocols (e.g. token bridges) via the trustedCheckpoints map.

Protocol Properties

We instantiated Zipline for two configurations: the Ethereum spec tests and the Ethereum mainnet. For both of these we measured the number of instructions required to verify finality of one epoch as well as the gas cost for submitting an update and for initiating a challenge.

Emulation time was measured on a 2023 MacBook M2 with 24 GB of RAM.

Spec Tests

Network Params

  Validators: 256
  Attestations: 96

Results

  MIPS instructions: 15,497,874,166
  Time to emulate: 160 s
  Input size: 38 KB
  Gas to submit: 93,843
  Gas to challenge: 5,124,231
  Gas per dissection: 2,981,624

Mainnet

TBD

Known Issues and Attacks

The implementation provided is a proof-of-concept only and has a number of known issues that make it unsuitable for production deployment at this time.

Delayed Chain Finality

The stateless finality client currently operates on the assumption that the origin chain is finalizing every epoch according to the 1-finality rule. While this is the case the vast majority of the time, there are instances where chain finality can be delayed. This was recently seen on Ethereum mainnet due to differing client implementations. Zipline must be able to deal with these cases in order to continue to follow the chain.

There is no significant difficulty in implementing this feature but it must be done with care to prevent long-range fork style attacks using the state patches.

Pending Checkpoint Dependencies

Currently the challenge period is longer than one epoch for the Ethereum and Gnosis beacon chains. This means that, similar to the Optimism rollup, there will at any point in time be a number of pending checkpoints that build on each other. As the finality window moves forward in time these will become finalized. The number of pending checkpoints waiting for finality should be a function of the challenge window and the epoch length.

A successful challenge on a historical checkpoint should invalidate all successive checkpoints that depend on it. This functionality is not currently implemented.

Payload Manipulation

The Zipline challenge method will accept an arbitrary blob of data, check the first 80 bytes for consistency with the challenge and then hash it to be loaded as input to the provable execution. Some care has been taken to ensure that any input data will result in termination of the MIPS verifier, however it may be possible that some input results in a non-terminating program. A program that doesn't terminate cannot be proven fraudulent, and thus a challenger that submits such an input will always win regardless of the honesty of the other participant.

An audit should take special care to ensure that the program will terminate within some number of instructions regardless of the input.

Handling of chain upgrades

The current implementation of the finality client only supports a single beacon chain fork version (e.g. Altair, Bellatrix) at a time. It does not support moving between fork versions and of course it cannot know what future fork versions will be. Any production deployment of Zipline would need to be upgradable in order to follow upgrades on its origin chain. This property is shared by all light-client based bridges and is a good argument for bridges to be secured by multiple strategies.

Future Work

There are a number of places where the efficiency of the protocol could be improved to both reduce the cost of a challenge game and reduce the length of the trace and thus the time required to run the emulator off-chain.

Slot based Finalization

Currently Zipline is designed to only finalize epoch boundary blocks (EBBs). This was chosen because it is the minimal increment by which the Casper protocol can advance. In reality bridge applications will want to access individual block headers in order to prove transaction inclusion or state.

This is still possible with Zipline but it does require consuming code to provide SSZ proofs of ancestry from a finalized block to the block of interest.

A small modification to Zipline would have relayers submitting all intermediate block roots since the last finalized checkpoint in a batch. A watcher would check every root and be able to challenge if any are incorrect. The verifier would just need to accept an additional input which contains the SSZ proofs linking these candidate blocks to the candidate checkpoint.

This modification would shift the ancestry proof checking from the runtime to the provable execution making it much cheaper for consuming applications.

Out-of-MIPS Key Decompression

The beacon state stores the BLS public keys in compressed curve form. That is, only the $x$ coordinate of the elliptic curve point is stored, plus a flag indicating which of the two possible $y$ coordinates was used.

Compressed keys can be decompressed by computing the $y$ coordinate, which requires computing a modular square root. This decompression operation is by far the most expensive part of the finality client verification.

This expensive operation could be eliminated by computing the y-coordinates natively (i.e. not in the emulator) and providing them as input via the pre-image oracle. A commitment to an SSZ Merkle data structure containing the y-coordinates could be added to the ZiplineInput container. Checking the correctness of a y-coordinate is far cheaper than computing it and only requires a single evaluation of the curve equation.
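A minimal sketch of that check, assuming the standard BLS12-381 G1 curve \(y^2 = x^3 + 4\) over the base field; handling of the compressed-encoding sign flag is omitted.

# BLS12-381 base field modulus
P = int(
    "1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf"
    "6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab",
    16,
)

def y_is_valid(x: int, y: int) -> bool:
    # A supplied y-coordinate is accepted if the point (x, y) satisfies the
    # curve equation; this is far cheaper than computing the square root.
    return (y * y - (x * x * x + 4)) % P == 0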

It is expected that this optimization would significantly reduce the size of the execution trace and the verification time.

Attestation Signature Aggregation (Super Attestations)

Currently all attestations required to prove finality must be submitted on-chain when a call to challenge() is made. This represents a significant calldata cost to the challenger. Even though they will recover the cost by winning the challenge game, it increases the size of the bond required and increases the amount of block space required to initiate a challenge, thus making DoS attacks cheaper to conduct.

The beacon chain takes advantage of BLS signature aggregation by aggregating committee signatures using elliptic curve addition. This works because all committee members sign the exact same message \(H\). Each individual signature is given by

\[S_i = x_i \cdot H\]

where \(x_i\) is the ith validator private key. The aggregate is calculated as

\[S = \sum S_i\]

and the aggregate key is

\[X = \sum X_i\]

where \(X_i\) is the ith validator public key.

Verifying requires using the pairing operation to check

\[e(S, G) == e(H, X)\].

This check is done once per committee per epoch. For a finality check this means n_committees * n_epochs attestations must be submitted and double that number of pairing operations are required to check all the attestation signatures.


BLS also allows signature aggregation over heterogeneous messages. For unique messages \(H_j\) the aggregate signature is calculated in the same way as above. The signature verification becomes a check of

\[e(S, G) == \prod_j e\left(H_j, \sum_{i \in signer(j)} X_i\right)\].

This alteration would reduce the number of signatures required to be submitted in calldata from n_committees * n_epochs to 1. It would also reduce the number of pairing operations from 2 * n_committees * n_epochs to n_committees * n_epochs + 1. This reduction in calldata cost and execution complexity would make this a valuable addition to a production implementation.
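As a purely illustrative example, with n_committees * n_epochs = 128 the calldata requirement drops from 128 aggregate signatures to a single one, and the number of pairing operations drops from 2 * 128 = 256 to 128 + 1 = 129.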

Containers

The containers described can be considered as an extension of the Beacon chain specification. Types from the beacon chain spec are used here without redefinition.

Zipline Input

This contains all the data required to prove the candidate checkpoint has been finalized given some trusted checkpoint. This is the input data type for the provable computation.

class ZiplineInput(Container):
    state_root: Root
    trusted_cp: Checkpoint
    candidate_cp: Checkpoint
    patches: List[StatePatch, MAX_PATCHES]
    attestations: List[Attestation, MAX_ATTESTATIONS]
    state_proof: List[Root, 3]

State Patch

A state patch can be applied to a BeaconState to produce a new beacon state. The patch only contains the data relevant to computing the validator shuffling.

class StatePatch(Container):
    epoch: Epoch  # epoch this patches up to. A single patch should only increment the epoch by 1
    activations: List[ValidatorIndex, MAX_ACTIVATIONS]
    exits: List[ValidatorIndex, MAX_EXITS]
    n_deposits_processed: uint32
    randao_next: Bytes32  # randao value needed to compute the shuffling in the NEXT epoch