Sui Node seed peers from validators info not working #20412
Comments
@Duoquote what docs are you looking at for syncing a fullnode? Take a look at https://docs.sui.io/guides/operator/sui-full-node#setting-up-a-full-node for a list of state sync fullnodes you can use as seed peers. We discourage peering directly with validators, as a large number of fullnodes paired to a validator can impact performance. It looks like your actual underlying issue is with the archival fallback - can you share your fullnode's config?
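For reference, a seed-peer entry in fullnode.yaml looks roughly like the sketch below. The hostname is one of the SSFNs listed on the docs page above; the peer-id here is a placeholder, so copy the real value from the docs rather than from this snippet:

```yaml
p2p-config:
  seed-peers:
    - address: /dns/mel-00.mainnet.sui.io/udp/8084  # an SSFN hostname from the docs
      peer-id: <hex-encoded network public key>     # placeholder, use the real value
```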
Sorry if I misinterpreted at first; yes, I am currently using the peer configuration you mentioned. I put together that list from what I looked up on suiscan. My fullnode is working, but I wanted faster, lower-latency sync, so I thought I could use whatever is close to my server; it turns out it doesn't work that way. Now, from my understanding of what you said, validators host additional nodes on their side and we use those instead of the validator node, right? What else can I do to improve latency? Also, I don't understand how multiple validators work in this scenario: if we're all sending requests to the same RPC servers, then who is validating? Is it distributed across all of them, or what?
Yes, the mainnet State Sync Full Nodes (SSFNs) are run by validators, and you connect to the SSFNs instead of directly to the validators to reduce bandwidth constraints on the validators.
The improvement offered by pairing directly with a validator is actually quite small (~50ms). The majority of the latency comes from the fact that fullnodes are executing and verifying the checkpoints they receive via state sync. We don't have any existing options for large latency reductions; the general advice is to run with high-performance disks and CPUs.
Sorry, I don't understand what you're asking.
Thanks for the response. I think I'm going to go with a high-performance server as you suggested. I also came across a custom indexer solution to use instead of polling the local RPC (it turns out JSON serialization/deserialization takes quite some time); I think it will work better than polling the local RPC? The next question was quite unrelated, no need to discuss how validation happens here. Thanks!
I guess it depends which latencies you're concerned about. If you create a custom indexer, you could have response latencies much lower than JSON-RPC (as you mentioned, serialization/deserialization takes time, and it's a more expressive data format), but the sync latencies will be slower. The custom indexer needs to read checkpoints from somewhere; by default that's an S3/GCS bucket, which adds latency for reading the checkpoints and waiting for them to be written by the upstream.
As I can't cut down on network latency, that's the most I can do, I guess.
What do you mean? I enabled this setting on my fullnode to read from my local node:

```yaml
checkpoint-executor-config:
  checkpoint-execution-max-concurrency: 200
  local-execution-timeout-sec: 30
data-ingestion-dir: /opt/sui/ingest
```

Does that apply to the full node as well? Does my full node sync from S3/GCS by default too? I thought the p2p config was what syncs checkpoints.
No, the full node does not sync from S3/GCS. Sorry, I was just describing the default method the custom indexer uses to read checkpoints - it looks like you're already planning to read them from the local fullnode disk though 👍, so you won't have the additional latency of GCS/S3.
Oh, is there a way I can fetch checkpoints without running a full node and syncing to disk? Also, I tried this after the local reader failed:

```rust
use anyhow::Result;
use async_trait::async_trait;
use sui_data_ingestion_core::{setup_single_workflow, Worker};
use sui_types::full_checkpoint_content::CheckpointData;

struct CustomWorker;

#[async_trait]
impl Worker for CustomWorker {
    type Result = ();

    async fn process_checkpoint(&self, checkpoint: &CheckpointData) -> Result<()> {
        // custom processing logic:
        // print out the checkpoint summary
        println!(
            "Processing checkpoint: {}",
            checkpoint.checkpoint_summary.to_string()
        );
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    let (executor, term_sender) = setup_single_workflow(
        CustomWorker,
        "https://checkpoints.mainnet.sui.io".to_string(),
        83728623, /* initial checkpoint number */
        5,        /* concurrency */
        None,     /* extra reader options */
    )
    .await?;
    executor.await?;
    Ok(())
}
```

I observed that it is 2 checkpoints behind compared to my full node. I tried the local reader, but it gives me an error.

Also, thank you so much for taking your precious time!
cc @phoenix-o on the custom indexer
When I look at validators on suiscan, for example Mysten 1 (https://suiscan.xyz/mainnet/validator/0x4fffd0005522be4bc029724c7f0f6ed7093a6bf3a09b90e62f61dc15181e1a3e/info), I can see that the address is `/dns/mysten-1.mainnet.sui.io/udp/8084` and the Network Public Key Bytes value is `0Mfg9FEcDkWBRgl8KNgKNG0EZvwbUgXJjGyNPCk3dBE=`. When I convert it to bytes and then hex, it should be `ed54176cb93ed30aeaa48b3741d76c754295b76e55a510c5a338d41e37934002`, right? But when I try to input that configuration, my node just fails to sync. Am I converting it wrong?

My test configuration that I filled in from validator information:

Am I doing something wrong?

Also, where would I find, for example, the `mel-00.` or `ewr-00.` regional subdomains for such nodes?
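A quick way to sanity-check that conversion is to base64-decode the key and hex-encode the resulting bytes, then compare the output against the peer-id placed in the config. Below is a minimal Rust sketch of that check, assuming the `base64` (0.22) and `hex` crates as dependencies; these crates are not part of the thread's original code:

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};

fn main() {
    // Network Public Key Bytes as displayed on suiscan (base64-encoded)
    let b64 = "0Mfg9FEcDkWBRgl8KNgKNG0EZvwbUgXJjGyNPCk3dBE=";

    // Decode the base64 string to raw key bytes, then hex-encode them;
    // compare the printed value with the peer-id used in fullnode.yaml.
    let bytes = STANDARD.decode(b64).expect("valid base64");
    println!("{}", hex::encode(bytes));
}
```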