Reth has trouble sending Payload Status to lighthouse #13461
Comments
Does this happen consistently, or only shortly after a restart? The block execution times are very high, exceeding 2s at some point, which seems odd. What CPU and disk are you using?
It happens consistently, even after performing database compaction, though notably less frequently.
Here's a good sample of the interval I'm seeing today:
Describe the bug
This is a companion issue to the one reported here: sigp/lighthouse#6734
Either Lighthouse or Reth is struggling to communicate payload status fairly regularly. I'm not certain whether it happens on every block, but it occurs frequently enough that I see it every epoch or so. I'm running everything on the same machine, so connectivity isn't the problem.
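For context, the "payload status" being exchanged here is the Engine API `PayloadStatusV1` object that Reth (the execution client) returns to Lighthouse's `engine_newPayload` call. A sketch of what a healthy response looks like, per the Engine API spec (the hash value below is a placeholder, not taken from my logs):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "status": "VALID",
    "latestValidHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "validationError": null
  }
}
```

When this response is delayed or never arrives, Lighthouse logs the kind of errors reported in the companion issue.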
Steps to reproduce
Rust:
Lighthouse:
Reth:
Node logs
pre-database compaction
post-database compaction normal logs
post-database compaction error logs
Platform(s)
Linux (x86)
Container Type
Not running in a container
What version/commit are you on?
reth Version: 1.1.4-dev
Commit SHA: 058cfe2
Build Timestamp: 2024-12-19T15:21:06.089859378Z
Build Features: asm_keccak,jemalloc
Build Profile: maxperf
What database version are you on?
Current database version: 2
Local database version: 2
Which chain / network are you on?
mainnet
What type of node are you running?
Full via --full flag
What prune config do you use, if any?
block_interval = 5
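For reference, this setting lives in the `[prune]` section of `reth.toml`; a minimal sketch of that fragment, assuming the standard config layout (only `block_interval = 5` is taken from my actual config):

```toml
# reth.toml
[prune]
# Run the pruner every 5 blocks
block_interval = 5
```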
If you've built Reth from source, provide the full command you used
RUSTFLAGS="-C target-cpu=native" cargo build --profile maxperf --features jemalloc,asm-keccak
Code of Conduct