Retrospective Analysis of the Lodestar User Incentive Program

Authored by Phil Ngo

On March 23, 2023, we announced a program to experiment with out-of-protocol incentives for solo stakers on Ethereum to switch to our consensus client, Lodestar.

We did this because we wanted to add an additional incentive (beyond altruism) for independent node operators to consider trying Lodestar as part of their setup.

Solo stakers are some of the most important node operators in the Ethereum ecosystem, but they suffer from a natural lack of in-protocol economic incentives to switch to lower-usage clients.

Lodestar proposed its first mainnet block in November 2021 but currently accounts for only about 1–2% of staker share. Notwithstanding the risks of running majority clients, node operators have little motivation to tweak their setups to run a minority client.

This retrospective outlines the method and execution of the program, along with lessons learned, in the hope of enticing others to run similar programs for diversifying client usage on Ethereum.


The method

It was important for us to have some goals in mind coming into this experiment so we could target the right people with the right incentives. Our team came to the conclusion that we needed to:

  • Exclude larger node operators from this program. Compared to solo stakers, they do not need additional economic incentives to run Lodestar.

  • Make it as simple as possible for node operators to switch over to Lodestar.

  • Minimize game theoretical exploits to this program (e.g., maximizing prize with Sybil attacks).

  • Minimize the risk of unequally rewarding one node operator over another.

Minimizing risk

By including eligibility rules that made it burdensome for node operators to gain unfair advantages over others, we did our best to ensure legitimate Lodestar users had a good chance of proposing an eligible block (more on the eligibility period further below).

Naturally, though, operators with more stake had a higher chance of getting an eligible block included. This is why we verified that deposit addresses were not linked to beacon chain deposits exceeding a cumulative limit of 320 ETH. This limit was arbitrary, though, and was not based on any data about what amount constitutes a "whale."

Additionally, by checking the deposit address, fee recipient address, and Rocketpool node data (e.g., withdrawal address), we ensured that the burden of gaming the system was high enough to prevent operators from splitting their stake across more validators via Rocketpool, especially without a clear indication of what the winning prize amount would be for each node operator.

As an example, if a solo staker wanted to exit their single 32 ETH validator to create three 8 ETH Rocketpool mini-pool validators (8 ETH + RPL bond) for a better chance at proposing a block, the burden was high enough to disincentivize this.
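To make the whale limit concrete, here is a minimal sketch of the cumulative-deposit check, assuming deposit data has already been fetched from a beacon chain data source. The input shape and names are illustrative assumptions, not our actual tooling:

```python
from collections import defaultdict

# Arbitrary cumulative cap per deposit address used by the program.
WHALE_LIMIT_ETH = 320

def eligible_deposit_addresses(deposits):
    """`deposits` is a list of (deposit_address, amount_eth) pairs
    (hypothetical input shape, e.g. pulled from a beacon chain API)."""
    totals = defaultdict(float)
    for deposit_address, amount_eth in deposits:
        totals[deposit_address] += amount_eth
    # Keep only addresses whose cumulative deposits stay within the limit.
    return {addr for addr, total in totals.items() if total <= WHALE_LIMIT_ETH}

# Example: ten 32 ETH deposits (320 ETH total) stay eligible;
# an eleventh validator pushes the address over the limit.
deposits = [("0xaaa...", 32.0)] * 10 + [("0xbbb...", 32.0)] * 11
print(eligible_deposit_addresses(deposits))  # {'0xaaa...'}
```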

Designing the prize pool

One of the key design decisions for this program was the prize pool. We determined that a prize pool split equally among all winners was the fairest option. Although the prize amount for each individual operator would be unknown until the program ended, this model allowed all eligible node operators to win an equal prize regardless of when their block was proposed during the eligibility period. Since we cannot control when a validator is selected for a block proposal, this ensured maximum inclusion.

The downside to this model is that the unknown prize amount doesn't incentivize all node operators to participate, nor does it encourage them to publicize the program to others: the more eligible node operators there are, the smaller the prize for every winning operator.
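A toy calculation illustrates the trade-off; the pool size here is a made-up placeholder, not the program's actual amount:

```python
PRIZE_POOL_ETH = 100.0  # hypothetical pool size, for illustration only

# Each winner's share shrinks as more operators become eligible.
for winners in (10, 50, 85):
    print(f"{winners} winners -> {PRIZE_POOL_ETH / winners:.2f} ETH each")
# 10 winners -> 10.00 ETH each
# 50 winners -> 2.00 ETH each
# 85 winners -> 1.18 ETH each
```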

As the program's directors, we likely would have netted more participation with a larger marketing effort throughout the eligibility period, since others were not incentivized to publicize the program themselves.

The execution

There were some lessons learned from executing this program that we would take into account for a future iteration or for others to replicate a similar program for client diversification.

Eligibility period

We underestimated how much the validator set would grow over the period of the program. Proposers are selected (pseudo)randomly from the whole validator set, so as the set grows larger, the likelihood of any single validator proposing a block shrinks.

We set our eligibility period by estimating the average time in months it would take one validator to propose a block (number of validators ÷ block proposals per month), then multiplying it by two. With roughly 550,000 validators on mainnet when the program began, the eligibility period came out to approximately five months.

During the program, however, the validator set grew to over 800,000. This unfortunately meant that some node operators running Lodestar never got a chance to propose during the eligibility period, as proposals became scarcer for each validator over time.
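For reference, here is the back-of-the-envelope arithmetic behind that estimate, assuming one block proposal per 12-second slot and ignoring missed slots:

```python
SLOTS_PER_MONTH = 30 * 24 * 3600 // 12  # 216,000 proposal opportunities per month

def eligibility_period_months(validator_count, safety_factor=2):
    # Average months for one validator to propose a block,
    # doubled as a safety margin per the program's formula.
    return safety_factor * validator_count / SLOTS_PER_MONTH

print(f"{eligibility_period_months(550_000):.1f}")  # ~5.1 months at launch
print(f"{eligibility_period_months(800_000):.1f}")  # ~7.4 months by the end
```

By the end of the program, the same formula would have called for over seven months, which is why operators who switched late were squeezed out.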

One solo node operator even came into our Discord and asked us to extend the eligibility period by one day so their block could be included! It's hard to predict the growth of the validator set over time, but an estimate of that growth should have been baked into the eligibility period.

Verifying block proposal integrity

Knowing that something as simple as putting "Lodestar" in a block's graffiti could be enough to game the system, we made sure to have a verification method to eliminate blocks that were likely not proposed by the Lodestar client.

Client fingerprinting tools such as Blockprint are not perfect and only provide probabilities, especially when it comes to minority clients. The fingerprinting approach is still useful, though: we used our own internal script to analyze characteristics unique to our client, such as attestation packing, to determine whether a block was likely proposed by Lodestar.
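As a rough sketch of this two-stage filter (the names, input shape, and scoring function are illustrative; our actual script inspected block contents such as attestation packing):

```python
def filter_candidate_blocks(blocks, lodestar_score, threshold=0.5):
    """Keep blocks whose graffiti mentions Lodestar AND whose fingerprint
    score suggests Lodestar likely produced them. `lodestar_score` maps a
    block to an estimated P(proposed by Lodestar), e.g. a Blockprint-style
    classifier or a custom attestation-packing heuristic."""
    candidates = [b for b in blocks if "lodestar" in b["graffiti"].lower()]
    return [b for b in candidates if lodestar_score(b) >= threshold]

blocks = [
    {"slot": 1, "graffiti": "Lodestar <3"},
    {"slot": 2, "graffiti": "prysm"},      # filtered out by graffiti
    {"slot": 3, "graffiti": "lodestar"},   # graffiti-only imposter
]
score = lambda b: 0.9 if b["slot"] == 1 else 0.1  # dummy scorer for the demo
print(filter_candidate_blocks(blocks, score))  # keeps only slot 1
```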

Some blocks from validators with the correct graffiti were eliminated by this method, and some of those were obviously just graffiti changes. Those cheaters were caught, but if it had come down to a dispute, no current method can determine with 100% certainty that a block was actually proposed by a specific client.

Scalable correlation of data based on eligibility rules

Going through large amounts of data manually is burdensome and increases the risk of human mistakes. We manually analyzed ~580 potentially eligible blocks after filtering the 11,400 blocks with eligible graffiti proposed during the eligibility period.

Thanks to data collected and retained by providers such as Miga Labs, we were able to eliminate large node operators and whale sets from the list. And although APIs could tell us whether potential blocks came from validators with duplicate fee recipients or deposit addresses, other eligibility rules, such as our "whale limit," still required manual human verification. We also had to correlate information from RocketScan for Rocketpool node operators and make comparisons there to remove duplicate blocks from the same node operator.
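Here is a sketch of what an automated de-duplication pass could look like, using illustrative field names; in reality the correlation pulled from multiple sources (APIs, Miga Labs data, RocketScan) rather than a single pre-joined record:

```python
def dedupe_by_operator(blocks):
    """Collapse candidate blocks that share any operator-identity key
    (deposit address, fee recipient, Rocketpool node address) so each
    node operator appears at most once. Assumes `blocks` is sorted by
    slot and each block dict carries the joined identity fields."""
    seen = set()
    winners = []
    for block in blocks:
        keys = {block.get("deposit_address"),
                block.get("fee_recipient"),
                block.get("rocketpool_node")}
        keys.discard(None)
        if keys & seen:
            continue  # same operator already has a winning block
        seen |= keys
        winners.append(block)
    return winners
```

An automated pass like this over the final list would likely have flagged duplicates such as the two Rocketpool entries described below.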

Manual verification is, of course, prone to human error. As an example, the final list of winners accidentally included two duplicates from the same Rocketpool node operator, which slipped past our review. The actual number of winners was 85, not 87, once the duplicates were removed.

This is not a scalable method, and we were lucky to have only about 580 blocks to verify manually. There are likely ways to automate more of the manual work, but maximizing automation would require coordination and cooperation with additional data providers like BeaconScan and RocketScan.

Transparent release of eliminated blocks

Some node operators were unsure why their validators did not show up on the winning list. As long as one of your validators made the list, you qualified for a share of the prize. However, it was not clear why some of the other validators/blocks were eliminated.

In the future, it would make sense to show all of the blocks proposed by solo validators and provide a reason why each was eliminated, even if it's just a simple duplicate from the same node operator. This is especially true if none of your validators qualified because a very specific program rule disqualified you.

Conclusion

Overall, the program was a success in promoting our minority client and bringing in real users who contribute to the usage and development of Lodestar. It incentivized our team to focus on ensuring smooth onramps for node operators of all levels and on integrating with some of the most-used community tools (e.g., Rocketpool and Dappnode) for easy adoption.

For the Ethereum community, this is one of many ways we can tackle important issues in our ecosystem. Minority clients have always been an important part of the decentralization and resilience of Ethereum. Likewise, we need to continually reinforce our community ethos by incentivizing the correct behaviours from all stakers who risk their ETH to secure the network.

This program shows that not every solution is a protocol solution and that we are limited only by our imaginations. What we need to figure out is how to make these types of programs scalable and viable long term. The success of Ethereum depends on it!

Have additional questions or a comment? Hop into our 👉 Discord.

About ChainSafe

ChainSafe is a leading blockchain research and development firm specializing in infrastructure solutions for web3. Alongside its contributions to major ecosystems such as Ethereum, Polkadot, Filecoin, and more, ChainSafe creates solutions for developers and teams across the web3 space utilizing gaming, interoperability, and decentralized storage expertise.

As part of its mission to build innovative products for users and improved tooling for developers, ChainSafe embodies an open source and community-oriented ethos to advance the future of the internet.

Website | Twitter | Linkedin | GitHub | Discord | YouTube | Newsletter