Conversation

@MCozhusheck
Contributor

@MCozhusheck MCozhusheck commented Sep 5, 2025

Description

Implements: #7470

Changes seed peer retrieval from a DNS TXT record to an HTTP endpoint. DNS TXT records are limited in size, which could become a problem for larger lists of seed peers.

The flow of getting peers is:

  • Query the DNS seed address to get the download URL
  • Download both the seed peers file and its signature
  • Verify the signature
  • If any of the above steps fails, fall back to the previous DNS TXT record method.
  • Parse the peers and add them to the database.
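The flow above can be sketched as a small control-flow skeleton. This is a std-only sketch; the function names (`resolve_download_url` aside, which the PR does add) are illustrative stand-ins passed in as closures, not the PR's actual async APIs:

```rust
// Which source the seed peers ultimately came from.
#[derive(Debug, PartialEq)]
enum SeedSource {
    SignedHttp(Vec<String>),
    DnsTxtFallback(Vec<String>),
}

// HTTP-first resolution: TXT record -> URL -> download + verify -> peers,
// with any failure along the way falling back to the legacy TXT peer list.
fn get_seed_peers(
    resolve_download_url: impl Fn(&str) -> Result<String, String>,
    download_and_verify: impl Fn(&str) -> Result<Vec<String>, String>,
    resolve_txt_records: impl Fn(&str) -> Vec<String>,
    dns_addr: &str,
) -> SeedSource {
    if let Ok(url) = resolve_download_url(dns_addr) {
        if let Ok(peers) = download_and_verify(&url) {
            return SeedSource::SignedHttp(peers);
        }
    }
    SeedSource::DnsTxtFallback(resolve_txt_records(dns_addr))
}

fn main() {
    // Simulate a network with no download URL published yet: must fall back.
    let src = get_seed_peers(
        |_: &str| -> Result<String, String> { Err("NoDownloadUrlFound".into()) },
        |_: &str| -> Result<Vec<String>, String> { Err("unused".into()) },
        |_: &str| vec!["peer_from_txt".to_string()],
        "seeds.mainnet.example",
    );
    assert_eq!(
        src,
        SeedSource::DnsTxtFallback(vec!["peer_from_txt".to_string()])
    );
}
```

The same skeleton also covers the esmeralda happy path: when both the URL resolution and the download/verification closures succeed, the TXT fallback is never consulted.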

To finish this feature, a few more changes are required on the infrastructure side:

  • Update DNS TXT records for the rest of the networks; for now only esmeralda is changed
  • Decide how to store the private key used to sign the seed nodes file
  • Upload seed peers files for the other networks; for now only esmeralda has them uploaded:

Seed nodes:
https://cdn-universe.tari.com/tari-project/tari/esmeralda/seednodes.json

Signature:
https://cdn-universe.tari.com/tari-project/tari/esmeralda/seednodes.json.asc

How Has This Been Tested?

Built minotari_node and attempted to connect to other peers.

Checked signature_verifier logs in network.log to confirm that the node selected the correct download URL, verified the signature, and obtained the seed peers.

Checked on both the esmeralda and mainnet networks. On esmeralda this follows the happy path (as described above), while on mainnet it falls back to the previous DNS TXT record method.

Breaking Changes

  • None
  • Requires data directory on base node to be deleted
  • Requires hard fork
  • Other - Please specify

Summary by CodeRabbit

  • New Features
    • Secure verification of signed update and seed files; automatic download and validation of seed peer lists via URLs discovered from DNS TXT records.
    • Added clearer error when no download URL is found and a configurable default download URL for peer seeds.
  • Refactor
    • Consolidated signature verification into a dedicated subsystem.
  • Tests
    • Added tests for JSON seed parsing, seed-to-peer conversion, and signature verification flows.

@coderabbitai
Contributor

coderabbitai bot commented Sep 5, 2025

Walkthrough

Introduces a new signature_verification module and PGP-based verification flow; refactors auto-update and P2P DNS seed initialization to download signed JSON via DNS TXT-discovered URLs, verify signatures, and parse peers; adds config download_url, updates dependencies (pgp/reqwest required, adds serde_json), and removes legacy signature code.

Changes

Cohort / File(s) Summary
Dependencies
base_layer/p2p/Cargo.toml
Make pgp and reqwest non-optional; add serde_json; simplify auto-update feature to only semver.
Auto-update refactor
base_layer/p2p/src/auto_update/mod.rs, base_layer/p2p/src/auto_update/error.rs, base_layer/p2p/src/auto_update/signature.rs (removed)
Replace local PGP signature logic with signature_verification APIs; adapt error variants (use SignatureVerificationError) and change verification flow to verify_signed_hashes; remove legacy verifier/tests file.
Signature verification subsystem
base_layer/p2p/src/signature_verification/mod.rs, .../error.rs, .../verifier.rs, .../gpg_keys/seed_peers_http.asc
Add new module that embeds maintainer keys, downloads files/signatures, verifies PGP signatures, exposes helpers (download/verify), and centralizes signature-related errors.
DNS/seed resolution flow
base_layer/p2p/src/initialization.rs, base_layer/p2p/src/peer_seeds.rs, base_layer/p2p/src/dns/error.rs
Add resolve_download_url (TXT → https URL); implement HTTP-first seed resolution with per-seed timeouts, download + signature verification of JSON seed files, skip invalid entries; add DnsClientError::NoDownloadUrlFound.
Configuration
base_layer/p2p/src/config.rs, common/config/presets/b_peer_seeds.toml
Add download_url: String to PeerSeedsConfig (#[serde(default)]) and populate default CDN URL in Default impl and preset.
Public module export
base_layer/p2p/src/lib.rs
Export pub mod signature_verification;.
Minor lint
comms/core/src/builder/placeholder.rs
Add #[allow(dead_code)] to PlaceholderService.
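The DNS/seed resolution cohort above centres on `resolve_download_url` (TXT → https URL). A hypothetical std-only sketch of that selection step: among the strings found in the TXT records, take the first that looks like an http(s) URL. The function name and record contents here are illustrative, not the PR's exact API:

```rust
// Pick the first TXT record string that is an http(s) URL; legacy
// `pubkey::address` peer records are skipped.
fn select_download_url(txt_records: &[String]) -> Option<&str> {
    txt_records
        .iter()
        .map(|s| s.trim())
        .find(|s| s.starts_with("https://") || s.starts_with("http://"))
}

fn main() {
    let records = vec![
        "legacy-peer-record".to_string(),
        "https://cdn-universe.tari.com/tari-project/tari/esmeralda/seednodes.json".to_string(),
    ];
    assert_eq!(
        select_download_url(&records),
        Some("https://cdn-universe.tari.com/tari-project/tari/esmeralda/seednodes.json")
    );
    // No URL among the records maps to the NoDownloadUrlFound error case.
    assert_eq!(select_download_url(&[]), None);
}
```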

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Init as P2pInitializer
  participant DNS as DnsSeedResolver
  participant HTTP as HTTP Client
  participant Sig as signature_verification

  Note over Init,DNS: Resolve seed -> download signed JSON -> verify -> parse peers
  Init->>DNS: resolve_download_url(addr)
  DNS-->>Init: https://... or NoDownloadUrlFound

  par download files
    Init->>HTTP: GET seednodes.json
    HTTP-->>Init: JSON content
  and
    Init->>HTTP: GET seednodes.json.sig
    HTTP-->>Init: StandaloneSignature
  end

  Init->>Sig: verify_file_signature(file_url, sig_url) / verify_signed_hashes(...)
  alt verified
    Sig-->>Init: verified (signer / hash match)
    Init->>Init: parse JSON -> SeedPeer -> Peer
  else failed
    Sig-->>Init: SignatureVerificationError
    Init-->>Init: log & skip
  end
sequenceDiagram
  autonumber
  participant AU as AutoUpdate
  participant HTTP as HTTP Client
  participant Sig as signature_verification

  Note over AU,Sig: Auto-update now uses verify_signed_hashes
  AU->>HTTP: GET hashes (text) and GET hashes.sig
  HTTP-->>AU: hashes text
  HTTP-->>AU: signature
  AU->>Sig: verify_signed_hashes(signature, hashes, update_hash)
  alt match & verified
    Sig-->>AU: (hash_bytes, filename)
    AU-->>AU: construct download URL -> Ok(Some(SoftwareUpdate))
  else no match / verification failed
    Sig-->>AU: SignatureVerificationError
    AU-->>AU: Ok(None)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • SWvheerden

Poem

A rabbit taps a PGP key,
Nose-twitch checks the signature free.
From DNS a URL hops in view,
Signed JSON spills trusted pew.
Hops, sniffs, and verifies—hooray! 🐰🔐


@MCozhusheck
Contributor Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Sep 5, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (20)
comms/core/src/builder/placeholder.rs (1)

32-33: Scope and document the lint suppression; consider reducing visibility if not API

  • If toolchain allows, prefer #[expect(dead_code, reason = "...")] to make intent explicit; otherwise add a brief comment.
  • If this type isn’t part of the public API, consider pub(crate).
-#[allow(dead_code)]
+#[expect(dead_code, reason = "Type-level placeholder used only in generic signatures; never constructed at runtime")]
+// If your rustc does not support `#[expect]`, fall back to:
+// #[allow(dead_code)] // Type-level placeholder used only in generic signatures
-pub struct PlaceholderService<TReq, TResp, TErr>(PhantomData<(TReq, TResp, TErr)>);
+pub(crate) struct PlaceholderService<TReq, TResp, TErr>(PhantomData<(TReq, TResp, TErr)>);
base_layer/p2p/src/dns/error.rs (1)

41-42: Add domain context to NoDownloadUrlFound error variant
NoDownloadUrlFound currently omits which seed/domain failed. Change its definition to NoDownloadUrlFound(String) and update the call in peer_seeds.rs:102 to pass the queried name for clearer diagnostics.

base_layer/p2p/src/signature_verification/error.rs (1)

25-35: Extend error surface for network hardening (size caps, scheme).

Add explicit variants so callers can distinguish policy violations (e.g., response too large, non-HTTPS) from transport errors. This enables clearer handling and metrics.

 #[derive(Debug, Error)]
 pub enum SignatureVerificationError {
     #[error("Failed to download file: {0}")]
     DownloadError(#[from] reqwest::Error),
+    #[error("Response too large")]
+    ResponseTooLarge,
+    #[error("Unsupported URL scheme")]
+    UnsupportedScheme,
     #[error("Failed to verify signature: {0}")]
     SignatureError(#[from] pgp::errors::Error),
     #[error("Signature verification failed")]
     VerificationFailed,
     #[error("Invalid hash format")]
     InvalidHashFormat,
 }
base_layer/p2p/Cargo.toml (1)

29-29: Loosen or unify serde_json version to avoid duplicates.

1.0.51 is quite old; consider 1.0 or a workspace-managed version to reduce lockfile churn.

-serde_json = "1.0.51"
+serde_json = "1.0"
base_layer/p2p/src/config.rs (1)

70-72: Document the new config field

Please document the intent and security expectation (HTTPS-only) so operators understand when it’s used vs. DNS TXT.

Apply this diff:

-    #[serde(default)]
-    pub download_url: String,
+    /// Default HTTPS URL to fetch the signed seed peers JSON if DNS TXT does not provide one.
+    /// Expected to be HTTPS.
+    #[serde(default)]
+    pub download_url: String,
base_layer/p2p/src/auto_update/mod.rs (1)

123-141: Log verification failures and warn on non-HTTPS update endpoints

Returning Ok(None) silently hides the reason; add a warn log. Also warn if any configured update URL is non-HTTPS.

Apply this diff:

             let (hashes, sig) = future::join(
                 signature_verification::download_hashes_file(&hashes_url),
                 signature_verification::download_hashes_sig_file(&hashes_sig_url),
             )
             .await;
             let hashes = hashes?;
             let sig = sig?;
             let verifier = SignedMessageVerifier::new(signature_verification::maintainers().collect());
+            if !(hashes_url.starts_with("https://")
+                && hashes_sig_url.starts_with("https://")
+                && download_base_url.starts_with("https://"))
+            {
+                log::warn!(
+                    target: LOG_TARGET,
+                    "Non-HTTPS auto-update URL(s) detected (hashes_url: {hashes_url}, hashes_sig_url: {hashes_sig_url}, download_base_url: {download_base_url})."
+                );
+            }
             match verifier.verify_signed_hashes(&sig, &hashes, &update_spec.hash) {
                 Ok((_, filename)) => {
                    let download_url = format!("{download_base_url}/{filename}");
                     log::info!(target: LOG_TARGET, "Valid update found at {download_url}");
                     Ok(Some(SoftwareUpdate {
                         spec: update_spec,
                         download_url,
                     }))
                 },
-                Err(_) => Ok(None),
+                Err(err) => {
+                    log::warn!(target: LOG_TARGET, "Signature verification failed for update {update_spec}: {err}");
+                    Ok(None)
+                },
             }

Optional: use try_join! if you prefer early error propagation on the first failing download; join is fine if you want both attempts regardless.

base_layer/p2p/src/initialization.rs (4)

104-120: Move example block to module docs or remove

The inline commented example adds noise in a hot-path file. Consider moving it to the signature_verification module docs or leaving a brief pointer.

-// Example usage of signature verification module:
-// ...
+// See p2p::signature_verification module docs for usage examples.

477-489: Expose timeout via config and log the resolved URL as well

Hardcoding 5s may be too strict/lenient per environment. Also log the resolved download URL for easier troubleshooting.

-async fn get_url_from_dns(resolver: &mut DnsSeedResolver, addr: &str) -> Result<(String, String), DnsClientError> {
+async fn get_url_from_dns(
+    resolver: &mut DnsSeedResolver,
+    addr: &str,
+) -> Result<(String, String), DnsClientError> {
     let timer = Instant::now();
-    let download_url_res = match timeout(Duration::from_secs(5), resolver.resolve_download_url(addr)).await {
+    let download_url_res = match timeout(Duration::from_secs(5), resolver.resolve_download_url(addr)).await {
         Ok(res) => res,
         Err(_) => {
             warn!(target: LOG_TARGET, "Timeout resolving DNS download URL `{addr}`");
             Err(DnsClientError::Timeout)
         },
     }?;
-    let res = (download_url_res, addr.to_string());
-    info!(target: LOG_TARGET, "Resolved DNS download URL `{}` in {:.0?}", addr, timer.elapsed());
+    let res = (download_url_res.clone(), addr.to_string());
+    info!(
+        target: LOG_TARGET,
+        "Resolved DNS `{}` to download URL `{}` in {:.0?}",
+        addr, download_url_res, timer.elapsed()
+    );
     Ok(res)
 }

585-602: Bound concurrency or add per-task timeout to avoid startup stalls

If many DNS seeds are configured or a CDN misbehaves, unbounded join_all may stall startup. Add a per-future timeout or a small concurrency limit (e.g., buffer_unordered(4)).

-    let all_seed_peers: Vec<SeedPeer> = future::join_all(downloading)
+    use futures::StreamExt;
+    let all_seed_peers: Vec<SeedPeer> = futures::stream::iter(downloading)
+        .buffer_unordered(4)
         .await
         .into_iter()
         .filter_map(|result| match result {
             Ok(peers) => Some(peers),
             Err(e) => {
                 warn!(target: LOG_TARGET, "Failed to download/verify seed peers: {}", e);
                 None
             },
         })
         .flatten()
         .collect();

603-612: Optional: dedupe seed peers before insertion

If JSON contains duplicates, add_or_update_peer will repeat work. Consider deduping by public key/address.

-    let peers: Vec<Peer> = all_seed_peers.into_iter().map(Peer::from).collect();
+    use std::collections::HashSet;
+    let mut seen = HashSet::new();
+    let peers: Vec<Peer> = all_seed_peers
+        .into_iter()
+        .filter(|sp| seen.insert((sp.public_key.clone(), sp.addresses.clone())))
+        .map(Peer::from)
+        .collect();
base_layer/p2p/src/signature_verification/mod.rs (3)

37-49: Parse maintainer keys once

Avoid re-parsing keys on every verification by caching parsed keys.

-/// Returns an iterator over all configured maintainer public keys
-pub fn maintainers() -> impl Iterator<Item = SignedPublicKey> {
-    MAINTAINERS.iter().map(|s| {
-        let (pk, _) = SignedPublicKey::from_string(s).expect("Malformed maintainer PGP signature");
-        pk
-    })
-}
+use once_cell::sync::Lazy;
+static MAINTAINER_KEYS: Lazy<Vec<SignedPublicKey>> = Lazy::new(|| {
+    MAINTAINERS
+        .iter()
+        .map(|s| SignedPublicKey::from_string(s).expect("Malformed maintainer PGP signature").0)
+        .collect()
+});
+
+/// Returns an iterator over all configured maintainer public keys
+pub fn maintainers() -> impl Iterator<Item = SignedPublicKey> {
+    MAINTAINER_KEYS.iter().cloned()
+}

115-131: Return early on verify failure without logging twice

You already log in the error branch; returning Err(e) is fine. Consider trimming duplicate logs elsewhere to reduce noise.

-        Err(e) => {
-            warn!(target: LOG_TARGET, "Signature verification failed for {}: {}", file_url, e);
-            Err(e)
-        },
+        Err(e) => Err(e),

133-138: Optional: enforce a conservative max size for downloaded text

Large responses (even if signed) can stress memory. Consider rejecting responses over, say, 1–2 MB via content_length() check before reading.

 pub async fn download_file<T: IntoUrl>(url: T) -> Result<String, SignatureVerificationError> {
-    let resp = http_download(url).await?;
+    let resp = http_download(url).await?;
+    if let Some(len) = resp.content_length() {
+        if len > 2 * 1024 * 1024 {
+            // Requires a dedicated variant on SignatureVerificationError,
+            // e.g. ResponseTooLarge (see note below); reqwest::Error cannot
+            // be constructed directly.
+            return Err(SignatureVerificationError::ResponseTooLarge);
+        }
+    }
     let txt = resp.text().await?;
     Ok(txt)
 }

If SignatureVerificationError lacks a suitable variant, add one (e.g., ResponseTooLarge(u64)).

base_layer/p2p/src/signature_verification/verifier.rs (7)

41-51: Downgrade per-key verification failures to trace to avoid noisy warnings.

verify_signature logs warn! for every non-matching maintainer key, which will emit (N-1) warnings even on a successful verification. Use trace! (or remove the per-key failure log) and keep a single warning at the call site when no key matches.

- use log::{debug, warn};
+ use log::{debug, trace, warn};
@@
     pub fn verify_signature(&self, signature: &StandaloneSignature, message: &str) -> Option<&SignedPublicKey> {
         self.maintainers.iter().find(|pk| {
             let result = signature.verify(pk, message.as_bytes()).is_ok();
             if result {
                 debug!(target: LOG_TARGET, "Signature verified successfully with key: {:?}", pk.fingerprint());
             } else {
-                warn!(target: LOG_TARGET, "Signature verification failed with key: {:?}", pk.fingerprint());
+                trace!(target: LOG_TARGET, "Signature verification failed with key: {:?}", pk.fingerprint());
             }
             result
         })
     }

Also applies to: 23-25


40-51: Generalize inputs to bytes to support non-UTF8 payloads.

Signatures are over bytes, not strings. Accept impl AsRef<[u8]> to avoid accidental lossy conversions and broaden applicability.

-    pub fn verify_signature(&self, signature: &StandaloneSignature, message: &str) -> Option<&SignedPublicKey> {
-        self.maintainers.iter().find(|pk| {
-            let result = signature.verify(pk, message.as_bytes()).is_ok();
+    pub fn verify_signature(
+        &self,
+        signature: &StandaloneSignature,
+        message: impl AsRef<[u8]>,
+    ) -> Option<&SignedPublicKey> {
+        let msg = message.as_ref();
+        self.maintainers.iter().find(|pk| {
+            let result = signature.verify(pk, msg).is_ok();
             if result {
                 debug!(target: LOG_TARGET, "Signature verified successfully with key: {:?}", pk.fingerprint());
             } else {
                 trace!(target: LOG_TARGET, "Signature verification failed with key: {:?}", pk.fingerprint());
             }
             result
         })
     }
@@
-    pub fn verify_file_signature(
+    pub fn verify_file_signature(
         &self,
-        signature: &StandaloneSignature,
-        file_content: &str,
+        signature: &StandaloneSignature,
+        file_content: impl AsRef<[u8]>,
     ) -> Result<&SignedPublicKey, SignatureVerificationError> {
-        self.verify_signature(signature, file_content).ok_or_else(|| {
+        self.verify_signature(signature, file_content).ok_or_else(|| {
             warn!(target: LOG_TARGET, "File signature verification failed - no matching maintainer key found");
             SignatureVerificationError::VerificationFailed
         })
     }

Also applies to: 55-64


79-87: Make hash parsing more robust and avoid full collection; also compare slices explicitly.

  • Handle tabs/multiple spaces and ignore blank/comment lines.
  • Stream the search (no intermediate Vec).
  • Prefer hash.as_slice() == target_hash for clarity.
  • Consider a dedicated error (e.g., HashNotFound) for the final Err to distinguish from format errors. If adding a new variant isn’t feasible now, keep current behavior.
-        let parsed_hashes: Vec<(Vec<u8>, String)> = hashes
-            .lines()
-            .filter_map(|line| {
-                let mut parts = line.splitn(2, ' ');
-                let hash = parts.next().map(|s| s.trim()).map(from_hex)?.ok()?;
-                let filename = parts.next()?;
-                Some((hash, filename.trim().to_string()))
-            })
-            .collect();
-
-        parsed_hashes
-            .into_iter()
-            .find(|(hash, filename)| {
-                let matches = *hash == target_hash;
-                if matches {
-                    debug!(target: LOG_TARGET, "Found matching hash for file: {}", filename);
-                }
-                matches
-            })
-            .ok_or_else(|| {
-                warn!(target: LOG_TARGET, "No matching hash found in the signed hashes file");
-                SignatureVerificationError::InvalidHashFormat
-            })
+        for (lineno, raw_line) in hashes.lines().enumerate() {
+            let line = raw_line.trim();
+            if line.is_empty() || line.starts_with('#') {
+                continue;
+            }
+            // Find first unicode whitespace boundary to preserve filenames with spaces
+            let split_at = match line.find(char::is_whitespace) {
+                Some(i) => i,
+                None => continue, // skip unparseable lines
+            };
+            let (hash_hex, rest) = line.split_at(split_at);
+            let filename = rest.trim();
+            if filename.is_empty() {
+                continue;
+            }
+            let Ok(hash) = from_hex(hash_hex) else { continue };
+            if hash.as_slice() == target_hash {
+                debug!(target: LOG_TARGET, "Found matching hash for file: {}", filename);
+                return Ok((hash, filename.to_string()));
+            }
+            // else continue scanning
+        }
+        warn!(target: LOG_TARGET, "No matching hash found in the signed hashes file");
+        Err(SignatureVerificationError::InvalidHashFormat)

Also applies to: 89-101, 91-97
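A std-only version of the line-by-line lookup suggested above can be exercised in isolation. `from_hex` here is a minimal stand-in for the helper used in the PR; the hashes-file lines are illustrative:

```rust
// Minimal hex decoder standing in for the crate's from_hex helper.
fn from_hex(s: &str) -> Result<Vec<u8>, ()> {
    if s.len() % 2 != 0 || !s.chars().all(|c| c.is_ascii_hexdigit()) {
        return Err(());
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|_| ()))
        .collect()
}

// Scan `<hash> <filename>` lines: skip blanks and '#' comments, split on the
// first whitespace run (tabs and multiple spaces included), skip unparseable
// lines, and return the (hash, filename) pair matching target_hash.
fn find_hash(hashes: &str, target_hash: &[u8]) -> Option<(Vec<u8>, String)> {
    for raw_line in hashes.lines() {
        let line = raw_line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        let Some(split_at) = line.find(char::is_whitespace) else {
            continue;
        };
        let (hash_hex, rest) = line.split_at(split_at);
        let filename = rest.trim();
        if filename.is_empty() {
            continue;
        }
        let Ok(hash) = from_hex(hash_hex) else { continue };
        if hash.as_slice() == target_hash {
            return Some((hash, filename.to_string()));
        }
    }
    None
}

fn main() {
    let hashes = "# sha256 hashes\nab01\tminotari_node-linux.zip\nffee minotari_node-macos.zip\n";
    let found = find_hash(hashes, &[0xff, 0xee]);
    assert_eq!(
        found,
        Some((vec![0xff, 0xee], "minotari_node-macos.zip".to_string()))
    );
    assert!(find_hash(hashes, &[0x00]).is_none());
}
```

Note the streaming search returns on the first match, so no intermediate `Vec` of parsed lines is built.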


150-173: Remove noisy test prints; keep assertions only.

These printlns add CI noise and don’t aid failure diagnostics beyond the assertions.

-        // Debug: Print key fingerprint
-        println!("Key fingerprint: {:?}", key.fingerprint());
@@
-        // Debug: Try to understand what's happening
-        println!("Attempting to verify signature...");
-        println!("Content length: {}", SEEDNODES_JSON.len());
-        println!("First 50 chars: {:?}", &SEEDNODES_JSON[..50]);
@@
-        // Try verification with debug info
-        let verify_result = sig.verify(&key, SEEDNODES_JSON.as_bytes());
-        println!("Direct pgp verification result: {:?}", verify_result);
-
-        if let Err(e) = &verify_result {
-            println!("Verification error details: {:?}", e);
-        }
+        // Optional direct verification can be kept locally when debugging:
+        // let _ = sig.verify(&key, SEEDNODES_JSON.as_bytes());

381-432: Mark the debug-only test as ignored to avoid noisy output in CI.

This test has no assertions and prints a lot. Keep it for local debugging but skip in CI.

-    #[test]
+    #[test]
+    #[ignore = "Debug-only; produces verbose output. Enable locally when debugging signature parsing."]
     fn test_debug_signature_parsing() {

66-78: Consider returning the signer for verify_signed_hashes as well.

In some workflows, knowing which maintainer signed the hashes is useful for telemetry/auditing. Optionally return (&SignedPublicKey, Vec<u8>, String) or a struct. Non-blocking.


111-127: Risk: pinning to “current live” artifacts makes tests brittle.

If the upstream rotates keys or replaces the seed file, these fixtures get stale. Keep them, but reword comments to “captured fixture” and consider adding a local .asc/.json pair to avoid future confusion.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 36fd93f and ab0d553.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • base_layer/p2p/Cargo.toml (2 hunks)
  • base_layer/p2p/src/auto_update/error.rs (1 hunks)
  • base_layer/p2p/src/auto_update/mod.rs (2 hunks)
  • base_layer/p2p/src/auto_update/signature.rs (0 hunks)
  • base_layer/p2p/src/config.rs (2 hunks)
  • base_layer/p2p/src/dns/error.rs (1 hunks)
  • base_layer/p2p/src/initialization.rs (5 hunks)
  • base_layer/p2p/src/lib.rs (1 hunks)
  • base_layer/p2p/src/peer_seeds.rs (1 hunks)
  • base_layer/p2p/src/signature_verification/error.rs (1 hunks)
  • base_layer/p2p/src/signature_verification/gpg_keys/seed_peers_http.asc (1 hunks)
  • base_layer/p2p/src/signature_verification/mod.rs (1 hunks)
  • base_layer/p2p/src/signature_verification/verifier.rs (1 hunks)
  • comms/core/src/builder/placeholder.rs (1 hunks)
💤 Files with no reviewable changes (1)
  • base_layer/p2p/src/auto_update/signature.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: ci
  • GitHub Check: test (testnet, esmeralda)
  • GitHub Check: test (mainnet, stagenet)
  • GitHub Check: test (nextnet, nextnet)
  • GitHub Check: cargo check with stable
  • GitHub Check: ledger build tests
  • GitHub Check: wasm build tests
🔇 Additional comments (11)
base_layer/p2p/Cargo.toml (1)

56-56: Feature surface change acknowledged.

auto-update no longer drags in reqwest/pgp, which are now unconditional. Looks fine given seeds flow depends on them.

base_layer/p2p/src/config.rs (1)

103-106: Minor readability: bind network once
No behavior change; download_url is only used as a DNS fallback. Apply:

-            download_url: format!(
-                "https://cdn-universe.tari.com/tari-project/tari/{}/seednodes.json",
-                Network::get_current_or_user_setting_or_default().as_key_str()
-            ),
+            download_url: {
+                let network = Network::get_current_or_user_setting_or_default().as_key_str();
+                format!("https://cdn-universe.tari.com/tari-project/tari/{network}/seednodes.json")
+            },

Optional: add a config test to assert the default download_url for the current network.
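Such a test could be sketched as follows (std-only; `network_key` stands in for `Network::get_current_or_user_setting_or_default().as_key_str()`, and `default_download_url` is a hypothetical extraction of the `Default` impl's format string):

```rust
// Build the default seed peers URL from a network key, mirroring the
// format string used in the PeerSeedsConfig Default impl.
fn default_download_url(network_key: &str) -> String {
    format!("https://cdn-universe.tari.com/tari-project/tari/{network_key}/seednodes.json")
}

fn main() {
    assert_eq!(
        default_download_url("esmeralda"),
        "https://cdn-universe.tari.com/tari-project/tari/esmeralda/seednodes.json"
    );
}
```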

base_layer/p2p/src/auto_update/mod.rs (1)

53-56: Good modularization

Switching to the shared signature_verification module improves reuse and keeps verification logic in one place.

base_layer/p2p/src/auto_update/error.rs (2)

25-33: Error surface aligned with new signature verification module — looks good

AutoUpdateError cleanly wraps DnsClientError and SignatureVerificationError. The messages are clear and consistent.


25-33: No remaining references to removed AutoUpdateError variants
No matches found for AutoUpdateError::DownloadError or AutoUpdateError::SignatureError, nor for map_err(AutoUpdateError::DownloadError)/map_err(AutoUpdateError::SignatureError). Old variants have been fully removed.

base_layer/p2p/src/initialization.rs (2)

717-878: Tests are clear and valuable

JSON parsing and SeedPeer conversion tests cover typical, invalid, and empty cases. The PGP public key parse sanity check is a good addition.


491-540: Add timeout to avoid hangs and verify error conversion

  • Wrap the verify_signed_file(&url, …) call in tokio::time::timeout(Duration::from_secs(15), …) to fail fast on network or verification stalls.
  • Confirm that ServiceInitializationError: From<anyhow::Error> is implemented (required by the ? on anyhow!()); if it isn’t, either add that impl or change this function to return anyhow::Result<Vec<SeedPeer>>.
base_layer/p2p/src/signature_verification/mod.rs (2)

141-155: Good parsing and logging for signature downloads

Clear success/failure logs and error mapping to SignatureVerificationError.


157-165: Unit test is minimal but effective

Counts iteration forces key parsing, catching malformed bundled keys early.

base_layer/p2p/src/signature_verification/verifier.rs (2)

53-64: API shape and error mapping look good.

Clean separation: this helper elevates Option to Result with a single failure path. Works well with the proposed log-level change.


128-149: Nice coverage with real-world vectors.

Positive/negative cases and multi-maintainer selection look solid.

Also applies to: 188-201, 268-281, 305-378

```diff
 prost = "0.13.3"
 rand = "0.8"
-reqwest = { version = "0.11", optional = true, default-features = false }
+reqwest = { version = "0.11" }
```
Contributor

@coderabbitai coderabbitai bot Sep 5, 2025


💡 Verification agent

🧩 Analysis chain

Avoid pulling in OpenSSL; pin reqwest to rustls and disable defaults.

Dropping default-features = false likely re-enables native-tls. This can break cross-compiles and inflate the binary. Prefer rustls and explicitly set features.

-reqwest = { version = "0.11" }
+reqwest = { version = "0.11", default-features = false, features = ["rustls-tls", "gzip", "brotli"] }

🏁 Script executed:

#!/bin/bash
# Inspect reqwest feature resolution
rg -n 'reqwest\s*=' Cargo.toml
# If Cargo.lock exists, confirm tls backend
rg -n 'reqwest' Cargo.lock -A2 -B2 || true

Length of output: 979


Pin reqwest to rustls and disable default OpenSSL

In base_layer/p2p/Cargo.toml, replace:

- reqwest = { version = "0.11" }
+ reqwest = { version = "0.11", default-features = false, features = ["rustls-tls", "gzip", "brotli"] }

This ensures native-tls (OpenSSL) is disabled by default and uses rustls for TLS.

📝 Committable suggestion


Suggested change
reqwest = { version = "0.11" }
# In base_layer/p2p/Cargo.toml, under [dependencies]:
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls", "gzip", "brotli"] }
🤖 Prompt for AI Agents
In base_layer/p2p/Cargo.toml around line 26, the reqwest dependency currently
pulls default native-tls/OpenSSL; update the dependency to disable
default-features and enable rustls by setting default-features = false and
features = ["rustls-tls"] so reqwest uses rustls for TLS instead of native
OpenSSL.

Contributor Author


What are the advantages of rustls TLS over OpenSSL?

Contributor


, and there's significant funding from major tech companies supporting rustls development.

For Tari's use case, rustls provides better security guarantees, consistent cross-platform behavior, and avoids potential OpenSSL build complexities while maintaining competitive performance.

  (\   /)
 ( ._. )
o_(")(")

🧠 Learnings used
Learnt from: MCozhusheck
PR: tari-project/tari#7478
File: base_layer/p2p/src/peer_seeds.rs:96-104
Timestamp: 2025-09-08T08:50:32.976Z
Learning: MCozhusheck requested to be reminded later to change resolve_download_url() in base_layer/p2p/src/peer_seeds.rs to HTTPS-only (removing HTTP support) once their HTTPS endpoints are available. Currently HTTP support is needed due to infrastructure constraints.

Contributor Author


I think you cut beginning of your message. Can you send it again?

Contributor

@MCozhusheck, here's the complete response about the advantages of rustls TLS over OpenSSL:

Advantages of rustls over OpenSSL:

Memory Safety: rustls is written in Rust, providing memory safety guarantees that prevent buffer overflows, use-after-free, and other memory corruption vulnerabilities that have historically plagued OpenSSL.

Security Track Record: rustls has a much cleaner security history with fewer CVEs. OpenSSL has had numerous high-profile vulnerabilities (Heartbleed, etc.), while rustls's design prevents entire classes of these issues.

Cross-Platform Compilation: rustls eliminates OpenSSL's complex build dependencies and platform-specific quirks. This is especially valuable for cross-compilation scenarios and ensures consistent behavior across different operating systems.

Performance: rustls often matches or exceeds OpenSSL performance while using less memory, with better optimization for modern TLS versions.

Simpler Codebase: rustls has a much smaller, more auditable codebase (~100K lines vs OpenSSL's ~500K+ lines), making security reviews more feasible.

Industry Support: Major companies like Cloudflare, Google, and others actively support and use rustls in production, and there's significant funding from major tech companies supporting rustls development.

For Tari's use case, rustls provides better security guarantees, consistent cross-platform behavior, and avoids potential OpenSSL build complexities while maintaining competitive performance.


🧠 Learnings used
Learnt from: MCozhusheck
PR: tari-project/tari#7478
File: base_layer/p2p/src/peer_seeds.rs:96-104
Timestamp: 2025-09-08T08:50:32.976Z
Learning: MCozhusheck requested to be reminded later to change resolve_download_url() in base_layer/p2p/src/peer_seeds.rs to HTTPS-only (removing HTTP support) once their HTTPS endpoints are available. Currently HTTP support is needed due to infrastructure constraints.

pub mod peer_seeds;
pub mod proto;
pub mod services;
pub mod signature_verification;
Contributor
@coderabbitai coderabbitai bot Sep 5, 2025

💡 Verification agent

🧩 Analysis chain

Harden the HTTP seed download path (HTTPS-only, allowlist, timeouts, size caps).

Given the new URL-from-DNS flow, enforce:

  • HTTPS only; reject http://.
  • Allowlist host(s) per network (e.g., *.tari.com); reject IP literals and private/loopback ranges to prevent SSRF.
  • Reqwest client with connect/read timeouts and a strict max response size (seed file should be small).

Add telemetry around rejections and signature failures. I can draft a helper enforcing these policies.
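A std-only sketch of such a policy helper — the `tari.com` allowlist, the function name, and the error strings are illustrative, not the project's actual API:

```rust
use std::net::IpAddr;

/// Policy check for seed-list download URLs: HTTPS only, no IP-literal hosts,
/// and the host must sit under an allowlisted domain (assumed here: tari.com).
fn validate_seed_url(url: &str) -> Result<(), String> {
    let rest = url
        .strip_prefix("https://")
        .ok_or_else(|| "only https:// URLs are accepted".to_string())?;
    // The host is everything up to the first '/', ':' (port) or '?' (query).
    let host = rest
        .split(|c| c == '/' || c == ':' || c == '?')
        .next()
        .unwrap_or("");
    if host.parse::<IpAddr>().is_ok() {
        return Err("IP-literal hosts are rejected".to_string());
    }
    if !(host == "tari.com" || host.ends_with(".tari.com")) {
        return Err(format!("host `{host}` is not in the allowlist"));
    }
    Ok(())
}
```

Note the suffix check uses `.tari.com` (with the leading dot) so that a lookalike host such as `nottari.com` does not pass.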


🏁 Script executed:

#!/bin/bash
# Check for HTTPS enforcement, allowlist, and timeouts in the codebase
rg -nP --type=rust -C3 '\bresolve_download_url\b|download_(file|signature|hashes)' 
rg -nP --type=rust -C2 'Client::builder\(|reqwest::Client::builder\('
rg -nP --type=rust -C2 '\.timeout\s*\(|\.connect_timeout\s*\('
rg -nP --type=rust -C3 '"http://' -g '!**/*.md'
rg -nP --type=rust -C3 'is_private|is_loopback|IpAddr|allowlist|whitelist'

Length of output: 93587


Harden HTTP seed download in p2p layer

  • In base_layer/p2p/src/peer_seeds.rs::resolve_download_url, reject any URL not starting with https://.
  • In base_layer/p2p/src/signature_verification/http_download (and its callers), enforce a host allowlist (no IP literals or private/loopback ranges), use a Reqwest client with explicit connect/read timeouts and a strict max response‐body size, and reject requests exceeding these limits.
  • Add telemetry counters for URL rejections and signature-verification failures.

Contributor Author

As stated before - HTTPS endpoint is not ready yet.

Contributor

@MCozhusheck I understand - I see from our previous discussion that HTTPS endpoints aren't available yet due to infrastructure constraints.

Given the current HTTP support requirement, consider these interim security measures while maintaining HTTP compatibility:

  • Host allowlist: Still restrict to trusted domains (e.g., *.tari.com) to prevent SSRF attacks
  • Request timeouts: Add connect/read timeouts to prevent hanging requests
  • Response size limits: Cap the maximum response size for seed files
  • URL validation: Validate URL format and reject suspicious patterns

These measures can work with both HTTP and HTTPS URLs until your infrastructure supports HTTPS-only endpoints.
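The size-limit point above can be sketched with std-only I/O — the 512 KiB cap and the function name are assumptions, not values from the PR:

```rust
use std::io::{Error, ErrorKind, Read, Result};

/// Seed lists are tiny (KBs); cap the body defensively. Assumed value.
const MAX_SEED_FILE_BYTES: u64 = 512 * 1024;

/// Read a response body through `Read`, failing once it exceeds the cap
/// instead of buffering an arbitrarily large payload into memory.
fn read_capped<R: Read>(src: R) -> Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Allow one sentinel byte past the cap so oversize input is detectable.
    src.take(MAX_SEED_FILE_BYTES + 1).read_to_end(&mut buf)?;
    if buf.len() as u64 > MAX_SEED_FILE_BYTES {
        return Err(Error::new(ErrorKind::InvalidData, "seed file exceeds size cap"));
    }
    Ok(buf)
}
```

With reqwest the same idea applies to a chunked body stream; capping while streaming avoids ever allocating more than the limit.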


🧠 Learnings used
Learnt from: MCozhusheck
PR: tari-project/tari#7478
File: base_layer/p2p/src/peer_seeds.rs:96-104
Timestamp: 2025-09-08T08:50:32.976Z
Learning: MCozhusheck requested to be reminded later to change resolve_download_url() in base_layer/p2p/src/peer_seeds.rs to HTTPS-only (removing HTTP support) once their HTTPS endpoints are available. Currently HTTP support is needed due to infrastructure constraints.

Learnt from: SWvheerden
PR: tari-project/tari#7301
File: clients/rust/base_node_wallet_client/src/client/http.rs:70-106
Timestamp: 2025-07-11T06:07:36.367Z
Learning: In clients/rust/base_node_wallet_client/src/client/http.rs, the HTTP client server selection logic is intentionally designed to try the local API once and then fallback to the seed server permanently without retry mechanisms. This is the preferred behavior - they want to use local API if available, otherwise use seed server, without continuously retrying the local API.

Comment on lines +1 to +15
-----BEGIN PGP PUBLIC KEY BLOCK-----

mDMEaLl/nhYJKwYBBAHaRw8BAQdAocXM74pI54REY9Y0fESxir/iq8We9wp6JHFP
z8vcdm20Sk1hY2llaiAoVGVzdCBmb3Igc2VlZCBwZWVycyBIVFRQIGRvd25sb2Fk
KSA8bWFjaWVqLmtvenVzemVrQHNwYWNlaW5jaC5jb20+iJMEExYKADsWIQTaz8Pe
9KT58ia7xIFrHRtevPqxvwUCaLl/ngIbAwULCQgHAgIiAgYVCgkICwIEFgIDAQIe
BwIXgAAKCRBrHRtevPqxv5cOAQDR1jrEiLxlsEFLsI6DLd0I7SRQDw+tziT/02ed
7E8wMQD/ZzdO7ZO8oLfneJrrwoWiGk241+yq7ym5uEcBuhnKyQ+0Uk1hY2llaiBL
b3p1c3playAoVGVzdGluZyBzZWVkIHBlZXJzIEhUVFAgZG93bmxvYWQpIDxtYWNp
ZWoua296dXN6ZWtAc3BhY2VpbmNoLmNvbT6IkwQTFgoAOxYhBNrPw970pPnyJrvE
gWsdG168+rG/BQJousXyAhsDBQsJCAcCAiICBhUKCQgLAgQWAgMBAh4HAheAAAoJ
EGsdG168+rG/zIgA/RmbnuNU7/mrSIqV62U5wPPhj3fT7+zR/9Ayn1ME+KbMAQC8
fSb4bOv48TUskOtzWd9j+AuH+2w1bvi9/niKAPgrAA==
=p4US
-----END PGP PUBLIC KEY BLOCK-----
Contributor
@coderabbitai coderabbitai bot Sep 5, 2025

💡 Verification agent

🧩 Analysis chain

Do not ship a test maintainer key as a trusted root.

The UID in this public key indicates it’s a test key. Shipping it as a trusted maintainer key risks accepting test-signed seed lists in production. Replace with production keys, and gate test keys behind cfg(test)/a test feature. Consider requiring M-of-N signatures to mitigate single-key compromise; also document and verify fingerprints in code.

Run to locate where this key is trusted:


🏁 Script executed:

#!/bin/bash
rg -n 'seed_peers_http\.asc' -C3

Length of output: 1260


Remove test PGP key from production trusted roots

  • Delete gpg_keys/seed_peers_http.asc from MAINTAINERS (base_layer/p2p/src/signature_verification/mod.rs:40) and the SEED_PEERS_PUBLIC_KEY constant in verifier.rs:111; replace with production maintainer key(s).
  • Gate the test key behind #[cfg(test)] or a dedicated test-only feature.
  • Consider requiring M-of-N signatures and embedding/verifying key fingerprints in the code.
🤖 Prompt for AI Agents
In base_layer/p2p/src/signature_verification/gpg_keys/seed_peers_http.asc lines
1-15 (and related refs in base_layer/p2p/src/signature_verification/mod.rs:40
and verifier.rs:111), remove the test PGP public key from the production trusted
roots: delete the seed_peers_http.asc file and remove the MAINTAINERS reference
in mod.rs and the SEED_PEERS_PUBLIC_KEY constant in verifier.rs, then replace
them with the real production maintainer key(s) or fingerprints; if you still
need this test key for unit tests, move it into test-only code guarded by
cfg(test) or a dedicated feature and load it only in test builds; additionally,
update the verifier to support embedding/verifying key fingerprint(s) (or M-of-N
verification) instead of trusting a single test key and ensure any new
production keys are stored/loaded from the correct secure constant or config
location.

Contributor Author

The way it is right now is fine. This is a public key used only for verification; moreover, it is a temporary key that will later be replaced with a production key.

Contributor

@MCozhusheck, understood! I appreciate the clarification that this is a temporary verification key that will be replaced with the production key later. Since you're aware of the temporary nature and have plans to replace it, that approach works for this draft implementation.


✏️ Learnings added
Learnt from: MCozhusheck
PR: tari-project/tari#7478
File: base_layer/p2p/src/signature_verification/gpg_keys/seed_peers_http.asc:1-15
Timestamp: 2025-09-08T08:43:50.884Z
Learning: In the Tari codebase, temporary test PGP keys for signature verification are acceptable during development phases when they will be replaced with production keys before final release. The maintainers prefer to keep the implementation simple during drafts rather than over-engineering with test-only conditionals for temporary keys.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines 62 to 66
/// Perform an HTTP GET request and return the response
async fn http_download<T: IntoUrl>(url: T) -> Result<reqwest::Response, SignatureVerificationError> {
let resp = reqwest::get(url).await?.error_for_status()?;
Ok(resp)
}
Contributor

🛠️ Refactor suggestion

Add HTTP client with timeouts and UA; avoid reqwest::get

Network calls lack timeouts and UA, risking indefinite hangs at startup. Use a shared Client with sane timeouts.

-use reqwest::IntoUrl;
+use std::time::Duration;
+use reqwest::{Client, IntoUrl};
+use once_cell::sync::Lazy;
@@
-async fn http_download<T: IntoUrl>(url: T) -> Result<reqwest::Response, SignatureVerificationError> {
-    let resp = reqwest::get(url).await?.error_for_status()?;
+static HTTP_CLIENT: Lazy<Client> = Lazy::new(|| {
+    Client::builder()
+        .user_agent(concat!("tari-p2p/", env!("CARGO_PKG_VERSION")))
+        .timeout(Duration::from_secs(15))
+        .connect_timeout(Duration::from_secs(5))
+        .build()
+        .expect("HTTP client build failed")
+});
+
+async fn http_download<T: IntoUrl>(url: T) -> Result<reqwest::Response, SignatureVerificationError> {
+    let resp = HTTP_CLIENT.get(url).send().await?.error_for_status()?;
     Ok(resp)
 }

Note: add once_cell = "1" to Cargo.toml.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// Perform an HTTP GET request and return the response
async fn http_download<T: IntoUrl>(url: T) -> Result<reqwest::Response, SignatureVerificationError> {
let resp = reqwest::get(url).await?.error_for_status()?;
Ok(resp)
}
use std::time::Duration;
use reqwest::{Client, IntoUrl};
use once_cell::sync::Lazy;
static HTTP_CLIENT: Lazy<Client> = Lazy::new(|| {
Client::builder()
.user_agent(concat!("tari-p2p/", env!("CARGO_PKG_VERSION")))
.timeout(Duration::from_secs(15))
.connect_timeout(Duration::from_secs(5))
.build()
.expect("HTTP client build failed")
});
/// Perform an HTTP GET request and return the response
async fn http_download<T: IntoUrl>(url: T) -> Result<reqwest::Response, SignatureVerificationError> {
let resp = HTTP_CLIENT.get(url).send().await?.error_for_status()?;
Ok(resp)
}
🤖 Prompt for AI Agents
In base_layer/p2p/src/signature_verification/mod.rs around lines 62-66, replace
the direct reqwest::get call with a shared reqwest::Client configured with
sensible timeouts and a User-Agent: create a static Client using once_cell (add
once_cell = "1" to Cargo.toml), initialize it with
builder().connect_timeout(Duration::from_secs(...)).timeout(Duration::from_secs(...)).user_agent("...")
and reuse it for requests via client.get(url).send().await?.error_for_status()?;
ensure the http_download function uses that shared client and maps reqwest
errors into SignatureVerificationError as before.

@MCozhusheck
Contributor Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Sep 8, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
base_layer/p2p/src/signature_verification/mod.rs (1)

62-71: Use a shared reqwest::Client with UA and timeouts (don’t rebuild per call).
Rebuilding the Client for every request is expensive and drops User-Agent. Reuse a static Client with sane timeouts.

-use reqwest::{Client, IntoUrl, Response};
+use reqwest::{Client, IntoUrl, Response};
+use once_cell::sync::Lazy;
@@
-async fn http_download<T: IntoUrl>(url: T) -> Result<Response, SignatureVerificationError> {
-    let client = Client::builder()
-        .timeout(Duration::from_secs(15))
-        .connect_timeout(Duration::from_secs(5))
-        .build()
-        .expect("HTTP client build failed");
-    let resp = client.get(url).send().await?.error_for_status()?;
+static HTTP_CLIENT: Lazy<Client> = Lazy::new(|| {
+    Client::builder()
+        .user_agent(concat!("tari-p2p/", env!("CARGO_PKG_VERSION")))
+        .timeout(Duration::from_secs(15))
+        .connect_timeout(Duration::from_secs(5))
+        .build()
+        .expect("HTTP client build failed")
+});
+
+async fn http_download<T: IntoUrl>(url: T) -> Result<Response, SignatureVerificationError> {
+    let resp = HTTP_CLIENT.get(url).send().await?.error_for_status()?;
     Ok(resp)
 }
🧹 Nitpick comments (10)
base_layer/p2p/src/initialization.rs (5)

463-475: Warn on HTTP; plan HTTPS-only when infra is ready.
Timeout handling is solid. Given your note to move to HTTPS-only later, add a warning when the DNS TXT yields http:// and keep a TODO to enforce HTTPS once endpoints are live.

Leaning on your retrieved learning (you asked to be reminded to switch to HTTPS-only).

 async fn get_url_from_dns(resolver: &mut DnsSeedResolver, addr: &str) -> Result<(String, String), DnsClientError> {
@@
-    let res = (download_url_res, addr.to_string());
+    if download_url_res.starts_with("http://") {
+        warn!(target: LOG_TARGET, "Download URL for `{addr}` is HTTP; signature verification mitigates tampering, but prefer HTTPS. TODO: enforce HTTPS once endpoints are ready.");
+    }
+    let res = (download_url_res, addr.to_string());
     info!(target: LOG_TARGET, "Resolved DNS download URL `{}` in {:.0?}", res.0, timer.elapsed());
     Ok(res)
 }

477-518: Add a defensive size cap on the signed JSON payload.
A small upper bound prevents pathological responses from stalling startup via huge bodies (even if signature fails later).

 async fn download_seed_peers_files(
     (url, _): (String, String),
 ) -> Result<Vec<SeedPeer>, ServiceInitializationError> {
@@
-    let content = verify_signed_file(&url, &format!("{}.asc", url)).await.map_err(|e| {
+    let content = verify_signed_file(&url, &format!("{}.asc", url)).await.map_err(|e| {
         warn!(target: LOG_TARGET, "Failed to verify seed nodes file from {}: {}", url, e);
         anyhow!("Signature verification failed: {}", e)
     })?;
 
+    // Hard ceiling: seed list should be tiny (KBs). Cap at 512 KiB to avoid abuse.
+    const MAX_SEED_FILE_BYTES: usize = 512 * 1024;
+    if content.as_bytes().len() > MAX_SEED_FILE_BYTES {
+        warn!(target: LOG_TARGET, "Seed nodes file from {} exceeds {} bytes", url, MAX_SEED_FILE_BYTES);
+        return Err(anyhow!("Seed nodes file too large").into());
+    }

520-594: Improve failure signaling and a small typo.

  • If no download URLs resolve, return a clearer error before attempting downloads.
  • Typo: downlaod_res -> download_res.
-    let resolved_urls: Vec<(String, String)> = future::join_all(resolving)
+    let resolved_urls: Vec<(String, String)> = future::join_all(resolving)
         .await
         .into_iter()
         .filter_map(|result| match result {
             Ok(url_pair) => Some(url_pair),
             Err(e) => {
                 warn!(target: LOG_TARGET, "Failed to resolve DNS seed: {}", e);
                 None
             },
         })
         .collect();
 
+    if resolved_urls.is_empty() {
+        return Err(anyhow!("No download URLs resolved from DNS seeds"));
+    }
@@
-    let seed_peers = future::join_all(downloading).await;
-    if seed_peers.iter().all(|downlaod_res| downlaod_res.is_err()) {
+    let seed_peers = future::join_all(downloading).await;
+    if seed_peers.iter().all(|download_res| download_res.is_err()) {
         return Err(anyhow!("Failed to download and verify seed peer files"));
     }

635-657: Logging nit: “Resolved DNS seed …” logs on errors too.
This message prints even when the lookup failed. Make it a neutral “Completed DNS lookup …” (debug) or move the info! into the Ok branch below.

-                info!(target: LOG_TARGET, "Resolved DNS seed `{}` in {:.0?}", addr, timer.elapsed());
+                debug!(target: LOG_TARGET, "Completed DNS lookup for `{}` in {:.0?}", addr, timer.elapsed());
                 res

728-733: Fallback path is fine; consider logging result size.
Optional: after fallback, log how many peers were obtained to help operators see whether HTTP or DNS yielded seeds.

base_layer/p2p/src/signature_verification/mod.rs (3)

43-49: Parse maintainer keys once (avoid repeated from_string).
maintainers() reparses keys on each call. Cache parsed keys in a Lazy to cut overhead.

-/// Returns an iterator over all configured maintainer public keys
-pub fn maintainers() -> impl Iterator<Item = SignedPublicKey> {
-    MAINTAINERS.iter().map(|s| {
-        let (pk, _) = SignedPublicKey::from_string(s).expect("Malformed maintainer PGP signature");
-        pk
-    })
-}
+use once_cell::sync::Lazy;
+static MAINTAINER_KEYS: Lazy<Vec<SignedPublicKey>> = Lazy::new(|| {
+    MAINTAINERS
+        .iter()
+        .map(|s| SignedPublicKey::from_string(s).expect("Malformed maintainer PGP signature").0)
+        .collect()
+});
+
+/// Returns an iterator over all configured maintainer public keys
+pub fn maintainers() -> impl Iterator<Item = SignedPublicKey> + 'static {
+    MAINTAINER_KEYS.iter().cloned()
+}

Note: add once_cell = "1" to Cargo.toml if not present.


120-136: Consider reusing a static verifier.
Each call collects maintainer keys and builds a new SignedMessageVerifier. Optional: cache a Lazy to reduce work.


139-159: Exact-bytes verification note (low risk).
You use Response.text() for both JSON and ASCII armor. Given ASCII content this is fine; if future targets include non-UTF8, prefer bytes() to avoid transcode surprises.

base_layer/p2p/src/signature_verification/verifier.rs (2)

67-103: Differentiate “no match” from “bad format”.
When no hash matches, returning InvalidHashFormat conflates two cases. If feasible, introduce a HashNotFound error to improve observability.
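A minimal sketch of the split, assuming a `(hex_hash, file_name)` entry list — the type, variant, and function names are illustrative, not the PR's actual API:

```rust
use std::fmt;

/// Hypothetical error split: "entry present but malformed" vs "no entry at all".
#[derive(Debug, PartialEq)]
enum HashCheckError {
    InvalidHashFormat(String),
    HashNotFound(String),
}

impl fmt::Display for HashCheckError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            HashCheckError::InvalidHashFormat(line) => write!(f, "malformed hash line: {line}"),
            HashCheckError::HashNotFound(file) => write!(f, "no hash entry for file: {file}"),
        }
    }
}

/// Look up the expected SHA-256 hex digest for `file`, distinguishing a
/// missing entry from a malformed one.
fn find_hash<'a>(entries: &'a [(String, String)], file: &str) -> Result<&'a str, HashCheckError> {
    for (hash, name) in entries {
        if name == file {
            if hash.len() != 64 || !hash.chars().all(|c| c.is_ascii_hexdigit()) {
                return Err(HashCheckError::InvalidHashFormat(hash.clone()));
            }
            return Ok(hash);
        }
    }
    Err(HashCheckError::HashNotFound(file.to_string()))
}
```

Logging or counting the two variants separately makes it obvious whether the upstream file omitted an entry or shipped a corrupt one.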


151-187: Tests: drop println! noise.
The prints are useful for local debugging but add noise in CI logs. Consider removing or gating behind env logging.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ab0d553 and dbb9760.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • base_layer/p2p/src/initialization.rs (6 hunks)
  • base_layer/p2p/src/peer_seeds.rs (1 hunks)
  • base_layer/p2p/src/signature_verification/mod.rs (1 hunks)
  • base_layer/p2p/src/signature_verification/verifier.rs (1 hunks)
  • common/config/presets/b_peer_seeds.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • common/config/presets/b_peer_seeds.toml
🚧 Files skipped from review as they are similar to previous changes (1)
  • base_layer/p2p/src/peer_seeds.rs
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: MCozhusheck
PR: tari-project/tari#7478
File: base_layer/p2p/src/peer_seeds.rs:96-104
Timestamp: 2025-09-08T08:50:32.976Z
Learning: MCozhusheck requested to be reminded later to change resolve_download_url() in base_layer/p2p/src/peer_seeds.rs to HTTPS-only (removing HTTP support) once their HTTPS endpoints are available. Currently HTTP support is needed due to infrastructure constraints.
📚 Learning: 2025-09-08T08:50:32.976Z
Learnt from: MCozhusheck
PR: tari-project/tari#7478
File: base_layer/p2p/src/peer_seeds.rs:96-104
Timestamp: 2025-09-08T08:50:32.976Z
Learning: MCozhusheck requested to be reminded later to change resolve_download_url() in base_layer/p2p/src/peer_seeds.rs to HTTPS-only (removing HTTP support) once their HTTPS endpoints are available. Currently HTTP support is needed due to infrastructure constraints.

Applied to files:

  • base_layer/p2p/src/initialization.rs
  • base_layer/p2p/src/signature_verification/mod.rs
📚 Learning: 2025-07-09T08:33:29.320Z
Learnt from: hansieodendaal
PR: tari-project/tari#7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the NUM_RETRIES logic in get_peers is specifically designed to handle peer connections that are closed while trying to RPC stream peer info, not general connection failures. The retry logic only applies when peers.is_empty() && !conn.is_connected() && attempt < NUM_RETRIES, which indicates a mid-stream disconnection.

Applied to files:

  • base_layer/p2p/src/initialization.rs
📚 Learning: 2025-07-09T08:33:29.320Z
Learnt from: hansieodendaal
PR: tari-project/tari#7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.

Applied to files:

  • base_layer/p2p/src/initialization.rs
📚 Learning: 2025-07-09T08:33:29.320Z
Learnt from: hansieodendaal
PR: tari-project/tari#7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.

Applied to files:

  • base_layer/p2p/src/initialization.rs
🔇 Additional comments (5)
base_layer/p2p/src/initialization.rs (3)

31-35: Imports look correct and necessary.
anyhow, Deserialize, and serde_json are used below. No issues.


95-95: Good: centralizes signature verification.
Using verify_signed_file here tightly scopes trust to embedded maintainer keys.


763-924: Tests add good coverage for parsing and verification.
Solid positive/negative cases; nice to see real-key parsing. No changes requested.

base_layer/p2p/src/signature_verification/mod.rs (1)

161-192: Tests are focused and helpful.
ASCII armored parsing and maintainer count checks look good.

base_layer/p2p/src/signature_verification/verifier.rs (1)

31-65: Verifier design LGTM.
API is small and clear; logging levels are appropriate.
