This Week in Lotus - Week 17 - 2023

Posted April 28, 2023 ‐ 2 min read

⚡️ nv19 on mainnet:

The mainnet upgraded successfully to nv19 yesterday. After the market cron system had rescheduled all the deal processing tasks 30 days ahead as part of its regular operations, we are now seeing the first fast cron and block validation times ⚡️.

grep -oP '"vmCron": \K[0-9.]+(?=,)' lotusdaemon.log
-------------- Old time ↓
13.860551815
10.730580818
10.904326853
12.830480138
--------------- New time ↓
0.768240341
0.988382018
0.864108236
0.922351756
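
If you want a quick summary from your own logs, you can pipe the same extraction through awk to average the durations (a sketch, assuming the same log file and format as above):

grep -oP '"vmCron": \K[0-9.]+(?=,)' lotusdaemon.log | awk '{ sum += $1 } END { print sum/NR, "s average" }'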

Some additional good news: catching up sync from a snapshot should be significantly faster again very soon! When the snapshot filename shows xxxxxxxx_2023_04_28T13_00_00Z.car or later, catching up sync from that snapshot will be fast!
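
For example, once a new enough snapshot is available, importing it works the same as before (a sketch; the filename prefix depends on your snapshot provider):

lotus daemon --import-snapshot xxxxxxxx_2023_04_28T13_00_00Z.car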

Spammy rate limit exceeded logs?

Have you seen this log in your Lotus node?

{"level":"warn","ts":"2023-04-27T07:58:45.064-0700","logger":"sub","caller":"sub/incoming.go:588","msg":"ignoring indexer message","sender":"12D3KooWLDf6KCzeMv16qPRaJsTLKJ5fR523h65iaYSRNfrQy7eU","err":"rate limit exceeded"}

It is not harmful and does not indicate any syncing issues! The log comes from Boost processes that generate indexing announcements and send them over gossipsub. Lotus nodes act as validators for these announcements as they propagate over gossipsub. When Lotus sees many announcements from the same source (Boost), it starts to rate-limit them and logs this message. The erroneous indexing announcements have been fixed in newer Boost versions:

  • SPs running Boost v1.7.0 - Please update your Boost process to v1.7.2
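
A typical source upgrade might look like this (a sketch; your deployment and build steps may differ):

cd boost
git fetch --tags
git checkout v1.7.2
make build
# restart your boostd process with the new binary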

In the meantime, you can suppress these logs with lotus log set-level --system sub error on the Lotus side, as shown below.
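
For example:

lotus log list                           # list the available logging subsystems
lotus log set-level --system sub error   # only show ERROR and above for the "sub" subsystem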

◀️ Resetting previously set values and scripts:

Now that we are past the pain of pre-nv19 cron times, we would like to remind everyone to reset a couple of previously mentioned environment variables and scripts that have been floating around in channels:

  • Some SPs lowered their PropagationDelay to get more headroom for block production.
    • Now that we are back to healthy block validation times, setting this back to the default is advised to ensure that all messages get propagated to your node (see the sketch after this list).
  • SPs pruning their mpool:
    • While this was a nice workaround to make syncing more gentle, that problem should no longer be a concern thanks to the significantly faster block computation times. The best thing for the network as a whole, and for your operations, is to stop pruning the mpool and include as many messages as possible in your blocks to bring the chain bandwidth ⬆️ and the BaseFee ⬇️.
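
A minimal sketch of what resetting might look like, assuming the delay was lowered via the PROPAGATION_DELAY_SECS environment variable and the mpool was pruned from a cron job (adjust for your own setup):

# let Lotus fall back to the default propagation delay
unset PROPAGATION_DELAY_SECS

# remove any scheduled pruning, e.g. a cron entry calling "lotus mpool clear"
crontab -l | grep -v 'lotus mpool clear' | crontab -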

That’s it for the week! Have a great weekend! ☀️