
Conversation

@wanghan-iapcm wanghan-iapcm commented Feb 8, 2026

This PR should be considered after #5194, #5204, and #5205.

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced registry-driven system for converting dpmodel components to PyTorch-compatible wrappers
    • Added PyTorch-friendly descriptor implementations with improved export/tracing support
    • Implemented exclude mask wrappers for enhanced model integration
  • Improvements

    • Refactored EmbeddingNet to a concrete class with explicit, stable public API
    • Enhanced serialization and deserialization with improved compatibility handling
    • Centralized attribute handling for better consistency across wrappers
  • Tests

    • Added comprehensive tests for EmbeddingNet behavior and forward pass validation
    • Extended test coverage for PyTorch integration and cross-backend consistency

@wanghan-iapcm wanghan-iapcm requested a review from njzjz February 8, 2026 14:18
@github-actions github-actions bot added the Python label Feb 8, 2026
@gemini-code-assist

Summary of Changes

Hello @wanghan-iapcm, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly advances DeepMD-kit's integration with PyTorch by introducing a new exportable backend. This backend enables dpmodel components to be wrapped as native PyTorch modules, facilitating torch.export for deployment and broader PyTorch ecosystem compatibility. The changes also include crucial improvements to array handling for better device and dtype consistency, a refactoring of the EmbeddingNet class for improved maintainability, and more robust PyTorch threading configuration.

Highlights

  • New PyTorch Exportable Backend: A new pt_expt backend has been introduced, providing torch.nn.Module wrappers for dpmodel classes to enable torch.export compatibility. This allows DeepMD models to be exported and integrated more seamlessly into the PyTorch ecosystem.
  • Automatic Attribute Conversion: The pt_expt backend automatically converts numpy arrays to torch.Tensor buffers and nested dpmodel objects to their corresponding pt_expt torch.nn.Module wrappers (a sketch follows this list). This simplifies the development and maintenance of PyTorch-compatible models.
  • Descriptor Support: The se_t and se_t_tebd descriptors are now fully supported within the new pt_expt backend, along with se_e2_a and se_r.
  • Robust Threading Configuration: Improvements have been made to the PyTorch threading configuration (torch.set_num_interop_threads and torch.set_num_threads) to prevent RuntimeError when these functions are called multiple times, ensuring more stable initialization.
  • Array API Compatibility Enhancements: Several dpmodel files have been updated to improve Array API compatibility, including adding TypeError to exception handling in to_numpy_array and ensuring xp.asarray, xp.zeros, xp.ones, and xp.concat calls correctly infer and utilize device and dtype.
  • EmbeddingNet Refactoring: The EmbeddingNet in deepmd.dpmodel.utils.network has been refactored from a factory function into a concrete class, enhancing type hinting, subclassing capabilities, and overall code clarity.
  • PyTorch Indexing Compatibility: Explicit int64 casting has been added for index arrays in dpa1 and se_t_tebd descriptors to ensure compatibility with PyTorch's take_along_dim operation.
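A minimal sketch of the registry-driven conversion idea, for orientation. The names register_dpmodel_mapping, try_convert_module, dpmodel_setattr, and _DPMODEL_TO_PT_EXPT come from this PR, but the signatures and bodies below are assumptions for illustration, not the actual implementation in deepmd/pt_expt/common.py:

# Illustrative sketch only: signatures and logic are assumed, not the
# actual implementation in deepmd/pt_expt/common.py.
import numpy as np
import torch

_DPMODEL_TO_PT_EXPT: dict[type, type] = {}

def register_dpmodel_mapping(dp_cls: type, pt_cls: type) -> None:
    """Map a dpmodel class to its pt_expt wrapper class."""
    _DPMODEL_TO_PT_EXPT[dp_cls] = pt_cls

def try_convert_module(value):
    """Return the registered torch wrapper for a dpmodel instance, if any."""
    pt_cls = _DPMODEL_TO_PT_EXPT.get(type(value))
    return pt_cls(value) if pt_cls is not None else None

def dpmodel_setattr(module: torch.nn.Module, name: str, value) -> bool:
    """Route attribute assignment on a wrapper module.

    numpy arrays become registered buffers; registered dpmodel objects
    become child modules. Returns True if the value was handled here.
    """
    if isinstance(value, np.ndarray):
        module.register_buffer(name, torch.from_numpy(value))
        return True
    converted = try_convert_module(value)
    if converted is not None:
        module.add_module(name, converted)
        return True
    return False  # caller falls back to normal attribute assignment
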
Changelog
  • deepmd/backend/pt_expt.py
    • Added PyTorchExportableBackend class, registering 'pt-expt' and 'pytorch-exportable' backends.
    • Defined backend features and availability check for torch.
  • deepmd/dpmodel/common.py
    • Added TypeError to exception handling in to_numpy_array.
    • Modified xp.asarray to explicitly move to 'cpu' device for numpy compatibility.
  • deepmd/dpmodel/descriptor/descriptor.py
    • Replaced numpy import with array_api_compat.
    • Updated extend_descrpt_stat to use array_api_compat for device and dtype inference in array creation and concatenation.
  • deepmd/dpmodel/descriptor/dpa1.py
    • Added device argument to xp.asarray calls in compute_input_stats.
    • Added explicit xp.astype(idx, xp.int64) cast for indices in call method for PyTorch compatibility.
  • deepmd/dpmodel/descriptor/repflows.py
    • Added device argument to xp.asarray calls in compute_input_stats.
  • deepmd/dpmodel/descriptor/repformers.py
    • Added device argument to xp.asarray calls in compute_input_stats.
  • deepmd/dpmodel/descriptor/se_e2_a.py
    • Added device argument to xp.asarray calls in compute_input_stats.
    • Added device argument to xp.zeros call in call method.
  • deepmd/dpmodel/descriptor/se_r.py
    • Added device argument to xp.asarray calls in compute_input_stats.
    • Added device argument to xp.zeros call in call method.
  • deepmd/dpmodel/descriptor/se_t.py
    • Added device argument to xp.asarray calls in compute_input_stats.
    • Added device argument to xp.zeros call in call method.
  • deepmd/dpmodel/descriptor/se_t_tebd.py
    • Added device argument to xp.asarray calls in compute_input_stats.
    • Added device argument to xp.zeros call in call method.
    • Added explicit xp.astype(idx, xp.int64) cast for indices in call method.
  • deepmd/dpmodel/utils/network.py
    • Refactored EmbeddingNet from a factory function to a concrete class.
    • Implemented serialize and deserialize methods for the new EmbeddingNet class.
  • deepmd/dpmodel/utils/type_embed.py
    • Added device argument to xp.eye and xp.zeros calls.
    • Modified change_type_map to use xp.concat and xp.asarray with device argument.
  • deepmd/env.py
    • Corrected environment variable fallback for DP_INTER_OP_PARALLELISM_THREADS to TF_INTER_OP_PARALLELISM_THREADS.
  • deepmd/pt/utils/env.py
    • Swapped inter_nthreads and intra_nthreads assignment.
    • Added try-except RuntimeError blocks and checks for existing thread counts when setting torch threads.
  • deepmd/pt_expt/__init__.py
    • Added SPDX license identifier.
  • deepmd/pt_expt/common.py
    • Introduced _DPMODEL_TO_PT_EXPT registry for dpmodel to pt_expt module conversion.
    • Added register_dpmodel_mapping and try_convert_module functions.
    • Implemented dpmodel_setattr for automatic attribute handling (numpy to torch buffers, dpmodel to pt_expt modules).
    • Added to_torch_array utility for device-aware tensor conversion.
  • deepmd/pt_expt/descriptor/__init__.py
    • Imports and registers se_t_tebd_block and defines BaseDescriptor and specific descriptor classes.
  • deepmd/pt_expt/descriptor/base_descriptor.py
    • Defined BaseDescriptor using torch.Tensor.
  • deepmd/pt_expt/descriptor/se_e2_a.py
    • Implemented DescrptSeA as a torch.nn.Module wrapper for dpmodel.descriptor.se_e2_a.DescrptSeAArrayAPI.
  • deepmd/pt_expt/descriptor/se_r.py
    • Implemented DescrptSeR as a torch.nn.Module wrapper for dpmodel.descriptor.se_r.DescrptSeR.
  • deepmd/pt_expt/descriptor/se_t.py
    • Implemented DescrptSeT as a torch.nn.Module wrapper for dpmodel.descriptor.se_t.DescrptSeT.
  • deepmd/pt_expt/descriptor/se_t_tebd.py
    • Implemented DescrptSeTTebd as a torch.nn.Module wrapper for dpmodel.descriptor.se_t_tebd.DescrptSeTTebd.
  • deepmd/pt_expt/descriptor/se_t_tebd_block.py
    • Implemented DescrptBlockSeTTebd wrapper and registered its mapping.
  • deepmd/pt_expt/utils/__init__.py
    • Exports AtomExcludeMask, PairExcludeMask, NetworkCollection, TypeEmbedNet.
  • deepmd/pt_expt/utils/env.py
    • Defined environment variables and constants specific to the pt_expt backend, including device and threading logic.
  • deepmd/pt_expt/utils/exclude_mask.py
    • Implemented AtomExcludeMask and PairExcludeMask as torch.nn.Module wrappers and registered their mappings.
  • deepmd/pt_expt/utils/network.py
    • Introduced TorchArrayParam for trainable parameters.
    • Implemented NativeLayer, NativeNet, EmbeddingNet, FittingNet, NetworkCollection, and LayerNorm as torch.nn.Module wrappers for their dpmodel counterparts, including layer conversion and mapping registration.
  • deepmd/pt_expt/utils/type_embed.py
    • Implemented TypeEmbedNet as a torch.nn.Module wrapper and registered its mapping.
  • pyproject.toml
    • Added deepmd.pt_expt to banned-module-level-imports.
    • Added deepmd/pt_expt/** and source/tests/pt_expt/** to tool.ruff.per-file-ignores.
  • source/tests/common/dpmodel/test_network.py
    • Added tests for the refactored EmbeddingNet class, covering concrete class verification, forward pass, and trainable parameter configurations.
  • source/tests/consistent/common.py
    • Added INSTALLED_PT_EXPT flag, pt_expt_class, skip_pt_expt attributes, and eval_pt_expt method to CommonTest.
    • Updated RefBackend enum and reference backend selection logic to include PT_EXPT.
    • Added test_pt_expt_consistent_with_ref and test_pt_expt_self_consistent tests.
  • source/tests/consistent/descriptor/common.py
    • Updated torch and PT_DEVICE import conditions to include INSTALLED_PT_EXPT.
    • Added eval_pt_expt_descriptor method.
  • source/tests/consistent/descriptor/test_se_e2_a.py
    • Integrated pt_expt.descriptor.se_e2_a.DescrptSeA into consistent tests.
  • source/tests/consistent/descriptor/test_se_r.py
    • Integrated pt_expt.descriptor.se_r.DescrptSeR into consistent tests.
  • source/tests/consistent/descriptor/test_se_t.py
    • Integrated pt_expt.descriptor.se_t.DescrptSeT into consistent tests.
  • source/tests/consistent/descriptor/test_se_t_tebd.py
    • Integrated pt_expt.descriptor.se_t_tebd.DescrptSeTTebd into consistent tests.
  • source/tests/pt/test_env_threads.py
    • Added test for deepmd.pt.utils.env to handle RuntimeError during thread setting.
  • source/tests/pt_expt/__init__.py
    • Added SPDX license identifier.
  • source/tests/pt_expt/conftest.py
    • Added pytest.importorskip("torch").
  • source/tests/pt_expt/descriptor/__init__.py
    • Added SPDX license identifier.
  • source/tests/pt_expt/descriptor/test_se_e2_a.py
    • Added consistency and exportability tests for pt_expt.descriptor.se_e2_a.DescrptSeA.
  • source/tests/pt_expt/descriptor/test_se_r.py
    • Added consistency and exportability tests for pt_expt.descriptor.se_r.DescrptSeR.
  • source/tests/pt_expt/descriptor/test_se_t.py
    • Added consistency and exportability tests for pt_expt.descriptor.se_t.DescrptSeT.
  • source/tests/pt_expt/descriptor/test_se_t_tebd.py
    • Added consistency and exportability tests for pt_expt.descriptor.se_t_tebd.DescrptSeTTebd.
  • source/tests/pt_expt/model/__init__.py
    • Added SPDX license identifier.
  • source/tests/pt_expt/utils/__init__.py
    • Added SPDX license identifier.
  • source/tests/pt_expt/utils/test_common.py
    • Added tests for to_torch_array utility.
  • source/tests/pt_expt/utils/test_env.py
    • Added test for deepmd.pt_expt.utils.env to handle RuntimeError during thread setting.
  • source/tests/pt_expt/utils/test_exclusion_mask.py
    • Added tests for AtomExcludeMask and PairExcludeMask in pt_expt.
  • source/tests/pt_expt/utils/test_network.py
    • Added extensive tests for pt_expt.utils.network.EmbeddingNet and NativeLayer, covering parameter handling, serialization, and cross-backend consistency.

@dosubot dosubot bot added the new feature label Feb 8, 2026
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new pt_expt (PyTorch Exportable) backend, a significant and well-executed feature addition. The implementation wraps the existing array-api-compat-based dpmodel classes in torch.nn.Module wrappers, making them usable across the PyTorch ecosystem. The use of a generic dpmodel_setattr helper and a registry system to automate the conversion of numpy arrays and nested objects is an excellent design choice that enhances maintainability.

The PR also includes a crucial refactoring of EmbeddingNet to support this extensible design and fixes several bugs in other parts of the codebase, such as in thread handling and environment variable processing. The new backend and changes are thoroughly covered by new tests.

My main suggestion for improvement is to address the code duplication in the environment setup files for the pt and pt_expt backends, which could be refactored.

Comment on lines 1 to 127
# SPDX-License-Identifier: LGPL-3.0-or-later
import logging
import multiprocessing
import os
import sys

import numpy as np

from deepmd.common import (
    VALID_PRECISION,
)
from deepmd.env import (
    GLOBAL_ENER_FLOAT_PRECISION,
    GLOBAL_NP_FLOAT_PRECISION,
    get_default_nthreads,
    set_default_nthreads,
)

log = logging.getLogger(__name__)
import torch

if sys.platform != "win32":
    try:
        multiprocessing.set_start_method("fork", force=True)
        log.debug("Successfully set multiprocessing start method to 'fork'.")
    except (RuntimeError, ValueError) as err:
        log.warning(f"Could not set multiprocessing start method: {err}")
else:
    log.debug("Skipping fork start method on Windows (not supported).")

SAMPLER_RECORD = os.environ.get("SAMPLER_RECORD", False)
DP_DTYPE_PROMOTION_STRICT = os.environ.get("DP_DTYPE_PROMOTION_STRICT", "0") == "1"
try:
    # only linux
    ncpus = len(os.sched_getaffinity(0))
except AttributeError:
    ncpus = os.cpu_count()
NUM_WORKERS = int(os.environ.get("NUM_WORKERS", min(4, ncpus)))
if multiprocessing.get_start_method() != "fork":
    # spawn or forkserver does not support NUM_WORKERS > 0 for DataLoader
    log.warning(
        "NUM_WORKERS > 0 is not supported with spawn or forkserver start method. "
        "Setting NUM_WORKERS to 0."
    )
    NUM_WORKERS = 0

# Make sure DDP uses correct device if applicable
LOCAL_RANK = os.environ.get("LOCAL_RANK")
LOCAL_RANK = int(0 if LOCAL_RANK is None else LOCAL_RANK)

if os.environ.get("DEVICE") == "cpu" or torch.cuda.is_available() is False:
    DEVICE = torch.device("cpu")
else:
    DEVICE = torch.device(f"cuda:{LOCAL_RANK}")

JIT = False
CACHE_PER_SYS = 5  # keep at most so many sets per sys in memory
ENERGY_BIAS_TRAINABLE = True
CUSTOM_OP_USE_JIT = False

PRECISION_DICT = {
    "float16": torch.float16,
    "float32": torch.float32,
    "float64": torch.float64,
    "half": torch.float16,
    "single": torch.float32,
    "double": torch.float64,
    "int32": torch.int32,
    "int64": torch.int64,
    "bfloat16": torch.bfloat16,
    "bool": torch.bool,
}
GLOBAL_PT_FLOAT_PRECISION = PRECISION_DICT[np.dtype(GLOBAL_NP_FLOAT_PRECISION).name]
GLOBAL_PT_ENER_FLOAT_PRECISION = PRECISION_DICT[
    np.dtype(GLOBAL_ENER_FLOAT_PRECISION).name
]
PRECISION_DICT["default"] = GLOBAL_PT_FLOAT_PRECISION
assert VALID_PRECISION.issubset(PRECISION_DICT.keys())
# cannot be automatically generated
RESERVED_PRECISION_DICT = {
    torch.float16: "float16",
    torch.float32: "float32",
    torch.float64: "float64",
    torch.int32: "int32",
    torch.int64: "int64",
    torch.bfloat16: "bfloat16",
    torch.bool: "bool",
}
assert set(PRECISION_DICT.values()) == set(RESERVED_PRECISION_DICT.keys())
DEFAULT_PRECISION = "float64"

# throw warnings if threads not set
set_default_nthreads()
intra_nthreads, inter_nthreads = get_default_nthreads()
if inter_nthreads > 0:  # the behavior of 0 is not documented
    # torch.set_num_interop_threads can only be called once per process.
    # Guard to avoid RuntimeError when both pt and pt_expt env modules are imported.
    try:
        if torch.get_num_interop_threads() != inter_nthreads:
            torch.set_num_interop_threads(inter_nthreads)
    except RuntimeError as err:
        log.warning(f"Could not set torch interop threads: {err}")
if intra_nthreads > 0:
    # torch.set_num_threads can also fail if called after threads are created.
    try:
        if torch.get_num_threads() != intra_nthreads:
            torch.set_num_threads(intra_nthreads)
    except RuntimeError as err:
        log.warning(f"Could not set torch intra threads: {err}")

__all__ = [
    "CACHE_PER_SYS",
    "CUSTOM_OP_USE_JIT",
    "DEFAULT_PRECISION",
    "DEVICE",
    "ENERGY_BIAS_TRAINABLE",
    "GLOBAL_ENER_FLOAT_PRECISION",
    "GLOBAL_NP_FLOAT_PRECISION",
    "GLOBAL_PT_ENER_FLOAT_PRECISION",
    "GLOBAL_PT_FLOAT_PRECISION",
    "JIT",
    "LOCAL_RANK",
    "NUM_WORKERS",
    "PRECISION_DICT",
    "RESERVED_PRECISION_DICT",
    "SAMPLER_RECORD",
]

Severity: medium

This file is almost an identical copy of deepmd/pt/utils/env.py. This significant code duplication can lead to maintenance issues, where a change in one file might not be propagated to the other.

Consider refactoring the common environment settings (device selection, precision dictionaries, thread management, etc.) into a shared module, for example, deepmd.torch_utils.env, which both pt and pt_expt backends can import and use. The backend-specific parts, if any, can then be handled within their respective env.py files.
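One possible shape for that refactor, as a sketch; the module path deepmd.torch_utils.env and the helper name set_torch_threads are hypothetical, not part of this PR:

# deepmd/torch_utils/env.py -- hypothetical shared module (sketch only)
import logging

import torch

log = logging.getLogger(__name__)

def set_torch_threads(intra_nthreads: int, inter_nthreads: int) -> None:
    """Best-effort thread configuration shared by the pt and pt_expt backends."""
    if inter_nthreads > 0:
        # torch.set_num_interop_threads can only be called once per process.
        try:
            if torch.get_num_interop_threads() != inter_nthreads:
                torch.set_num_interop_threads(inter_nthreads)
        except RuntimeError as err:
            log.warning(f"Could not set torch interop threads: {err}")
    if intra_nthreads > 0:
        # torch.set_num_threads can fail once worker threads already exist.
        try:
            if torch.get_num_threads() != intra_nthreads:
                torch.set_num_threads(intra_nthreads)
        except RuntimeError as err:
            log.warning(f"Could not set torch intra threads: {err}")

Each backend's env.py would then keep only its backend-specific constants (e.g. JIT, CUSTOM_OP_USE_JIT) and call set_torch_threads(*get_default_nthreads()).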

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 621c7ccbbc


class PyTorchExportableBackend(Backend):
    """PyTorch exportable backend."""

    name = "PyTorch Exportable"


P1: Rename the pt-expt backend to a key-compatible identifier

In deepmd/main.py, backend CLI options are normalized with backend.name.lower() and then validated against the registered backend keys. With name = "PyTorch Exportable", both --pt-expt and --backend pytorch-exportable resolve to "pytorch exportable", which is not a registered key (the keys are pt-expt and pytorch-exportable), so selecting this backend via the CLI fails with "Unknown backend" before any command runs.
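A minimal reproduction of the normalization mismatch (illustrative; this mirrors the described behavior, not the actual code in deepmd/main.py):

# Illustrative only: mirrors the described normalization, not deepmd/main.py.
registered_keys = {"pt-expt", "pytorch-exportable"}

name = "PyTorch Exportable"
key = name.lower()  # -> "pytorch exportable": a space, not a hyphen
assert key not in registered_keys  # the CLI would reject this as "Unknown backend"

# A key-compatible name round-trips cleanly:
name = "pt-expt"
assert name.lower() in registered_keys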


Comment on lines 37 to 41
features: ClassVar[Backend.Feature] = (
    Backend.Feature.ENTRY_POINT
    | Backend.Feature.DEEP_EVAL
    | Backend.Feature.NEIGHBOR_STAT
    | Backend.Feature.IO
)


P1: Remove unsupported feature flags from the pt-expt backend

This backend advertises DEEP_EVAL, NEIGHBOR_STAT, and IO, so .pte files pass feature/suffix filtering in common workflows (for example, dp test via format_model_suffix and backend detection). However, the corresponding hooks (deep_eval, neighbor_stat, serialize_hook, deserialize_hook) still raise NotImplementedError, so accepted commands on .pte models fail at runtime.



coderabbitai bot commented Feb 8, 2026

📝 Walkthrough

Walkthrough

This PR refactors EmbeddingNet from a factory-generated to a concrete class with explicit serialization, and introduces a comprehensive registry-driven system for converting dpmodel instances to PyTorch wrappers (pt_expt). New descriptor, network, and exclude mask wrappers integrate with the registry for seamless dpmodel-to-torch module conversion.
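A simplified, self-contained sketch of the refactoring pattern; NativeLayer, NativeNet, and build_layers here are stand-ins, and the real signatures in deepmd/dpmodel/utils/network.py differ:

# Stand-in classes so the sketch runs on its own; the real ones differ.
class NativeLayer:
    def __init__(self, num_in, num_out):
        self.num_in, self.num_out = num_in, num_out

class NativeNet:
    def __init__(self, layers):
        self.layers = layers

def build_layers(layer_cls, in_dim, neuron):
    dims = [in_dim, *neuron]
    return [layer_cls(dims[i], dims[i + 1]) for i in range(len(neuron))]

# Before: a factory manufactured the class at call time, which hurt type
# hints and subclassing.
def make_embedding_net(network_cls, layer_cls):
    class EN(network_cls):
        def __init__(self, in_dim, neuron):
            super().__init__(build_layers(layer_cls, in_dim, neuron))
    return EN

# After: one concrete class with an explicit, stable public API.
class EmbeddingNet(NativeNet):
    def __init__(self, in_dim, neuron=None):
        neuron = neuron if neuron is not None else [24, 48, 96]
        super().__init__(build_layers(NativeLayer, in_dim, neuron))
        self.in_dim = in_dim  # metadata kept for serialize()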

Changes

  • EmbeddingNet Concrete Class (deepmd/dpmodel/utils/network.py):
    Replaced the factory-generated EmbeddingNet with a concrete EmbeddingNet(NativeNet) class featuring explicit __init__, serialize, and deserialize methods, a stable constructor signature, and metadata storage.
  • Conversion Registry & Infrastructure (deepmd/pt_expt/common.py):
    Introduced the _DPMODEL_TO_PT_EXPT registry with public APIs register_dpmodel_mapping(), try_convert_module(), dpmodel_setattr(), and to_torch_array() for centralized, value-type-based dpmodel-to-pt_expt conversion and attribute handling.
  • PyTorch Network Wrappers (deepmd/pt_expt/utils/network.py):
    Implemented complete PyTorch-friendly network adapters: TorchArrayParam, NativeLayer, NativeNet, EmbeddingNet, FittingNet, NetworkCollection, and LayerNorm, with registry mappings for serialization/deserialization and device management.
  • Descriptor Wrappers (deepmd/pt_expt/descriptor/se_e2_a.py, deepmd/pt_expt/descriptor/se_r.py):
    Added DescrptSeA and DescrptSeR as torch.nn.Module-compatible wrappers around the dpmodel descriptors, registered with BaseDescriptor under the "se_e2_a_expt"/"se_a_expt" and "se_e2_r_expt"/"se_r_expt" identifiers respectively.
  • Exclude Mask Wrappers (deepmd/pt_expt/utils/exclude_mask.py):
    Introduced AtomExcludeMask and PairExcludeMask as PyTorch wrappers around the dpmodel classes, with registry mappings for automatic instance conversion.
  • Public API Updates & Test Coverage (deepmd/pt_expt/utils/__init__.py, source/tests/common/dpmodel/test_network.py, source/tests/pt_expt/utils/test_network.py):
    Exported NetworkCollection; added tests for the concrete EmbeddingNet class identity and forward semantics, plus comprehensive integration tests for pt_expt wrappers covering serialization round-trips, registry conversions, and trainable parameter variants.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

  • feat: new backend pytorch exportable. #5194: Directly related through pt_expt network wrappers and registry system that convert/deserialize EmbeddingNetDP into PyTorch EmbeddingNet, extending the concrete class refactoring pattern.

Suggested reviewers

  • njzjz
  • iProzd
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
  • Title check ⚠️ Warning: the PR title claims to implement the se_t and se_t_tebd descriptors, but the changeset shows work on embedding networks, descriptor wrappers (se_e2_a, se_r), and supporting infrastructure, with no se_t or se_t_tebd implementation. Resolution: update the title to reflect the actual changes (e.g. "feat(pt_expt): add PyTorch wrappers for embedding networks and descriptors"), or clarify whether the se_t/se_t_tebd descriptors are added elsewhere.
  • Docstring coverage ⚠️ Warning: docstring coverage is 32.34%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (1 passed)
  • Description check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.


No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
deepmd/dpmodel/utils/network.py (2)

816-816: Mutable default argument for neuron.

neuron: list[int] = [24, 48, 96] uses a mutable default, which Ruff flags as B006. The same pattern exists in the factory version (line 711), but since this is a new concrete class, it's a good time to fix it.

Suggested fix
     def __init__(
         self,
         in_dim: int,
-        neuron: list[int] = [24, 48, 96],
+        neuron: list[int] | None = None,
         activation_function: str = "tanh",
         resnet_dt: bool = False,
         precision: str = DEFAULT_PRECISION,
         seed: int | list[int] | None = None,
         bias: bool = True,
         trainable: bool | list[bool] = True,
     ) -> None:
+        if neuron is None:
+            neuron = [24, 48, 96]
         layers = []

886-893: Deserialize rebuilds layers twice — once in cls(**data), then overwritten immediately.

cls(**data) constructs all layers from scratch (random init → serialize → deserialize), and then lines 889–892 immediately replace them with the serialized data. This is the same pattern as the factory version so it's consistent, but worth noting the redundant work for future optimization.
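If the redundant construction ever becomes a hotspot, one possible direction is to skip __init__ during deserialization. A self-contained toy sketch (hypothetical, not the implementation in this PR):

# Toy sketch: avoid building throwaway randomly initialized layers.
class Layer:
    def __init__(self, w):
        self.w = w

    @classmethod
    def deserialize(cls, d):
        return cls(d["w"])

class Net:
    def __init__(self, widths):
        # normally builds freshly initialized layers
        self.layers = [Layer(w) for w in widths]

    @classmethod
    def deserialize(cls, data):
        obj = cls.__new__(cls)  # bypass __init__: no throwaway layers
        obj.layers = [Layer.deserialize(d) for d in data["layers"]]
        return obj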



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

🤖 Fix all issues with AI agents
In `@deepmd/backend/pt_expt.py`:
- Around line 37-42: The class-level features tuple declares
Backend.Feature.DEEP_EVAL, NEIGHBOR_STAT, and IO but the corresponding members
deep_eval, neighbor_stat, serialize_hook and deserialize_hook raise
NotImplementedError; fix this by either removing those unsupported flags from
the features tuple in the class (update the features variable to only include
ENTRY_POINT and any truly supported flags) or keep the flags but add a clear
TODO/roadmap comment above the features declaration referencing deep_eval,
neighbor_stat, serialize_hook and deserialize_hook and a target milestone for
implementing them so callers won’t be misled; adjust the features tuple or add
the comment in pt_expt.py accordingly.

In `@deepmd/dpmodel/utils/network.py`:
- Around line 876-896: The current EmbeddingNet.deserialize assumes obj.layers
is non-empty by doing layer_type = type(obj.layers[0]); add a guard to handle
empty obj.layers: if obj.layers is non-empty keep the existing behavior (use
layer_type = type(obj.layers[0]) and deserialize each serialized layer with
layer_type.deserialize), otherwise fall back to per-dict deserialization by
resolving each serialized layer's class (e.g. via the layer dict's "@class"
entry or a LayerBase.deserialize factory) and create obj.layers from those
deserialized instances; update the code in deserialize to check if not
obj.layers before computing layer_type and choose the appropriate
deserialization path.

In `@deepmd/pt_expt/utils/type_embed.py`:
- Around line 14-15: Remove the redundant noqa directive from the side-effect
import: locate the import statement that brings in deepmd.pt_expt.utils.network
(the comment above it explains it's to register EmbeddingNet) and delete the
trailing "# noqa: F401" so the import remains but the unnecessary noqa is
removed; ensure the import line still stays in place and no other changes are
made to the comment or import name.

In `@source/tests/consistent/descriptor/test_se_r.py`:
- Line 30: The fallback variable names are inconsistent: when INSTALLED_PT is
False the diff sets DescrptSeAPT and DescrptSeATF to None but other code (e.g.,
pt_class) expects DescrptSeRPT and DescrptSeRTF; update the fallback assignments
so the variables referenced elsewhere are defined (set DescrptSeRPT = None and
DescrptSeRTF = None when INSTALLED_PT is False), or alternatively rename usages
in pt_class to match the existing fallback names, ensuring DescrptSeRPT and
DescrptSeRTF exist to avoid a NameError.

In `@source/tests/pt_expt/descriptor/test_se_t_tebd.py`:
- Around line 149-159: The test constructs an inputs tuple that places
type_embedding as the 4th positional argument, which binds to forward's
extended_atype_embd (signature: forward(self, extended_coord, extended_atype,
nlist, extended_atype_embd=None, mapping=None, type_embedding=None)), causing
type_embedding to be exported as None; update the call that exports dd0 so
type_embedding is passed into the correct parameter—either insert explicit None
placeholders for extended_atype_embd and mapping before type_embedding in the
inputs tuple, or switch to exporting with keyword args (e.g., pass
type_embedding=...) when calling torch.export.export(dd0, ...), ensuring
dd0.forward receives type_embedding correctly.

In `@source/tests/pt_expt/utils/test_network.py`:
- Around line 162-164: The test uses an unseeded RNG (rng =
np.random.default_rng()) which makes the test non-deterministic; change the RNG
initialization to use the shared GLOBAL_SEED (e.g., rng =
np.random.default_rng(GLOBAL_SEED)) so x_np is reproducible, ensure GLOBAL_SEED
is imported/available in this test module, and keep the subsequent x_np =
rng.standard_normal((5, self.in_dim)) call unchanged.
🧹 Nitpick comments (10)
deepmd/pt_expt/utils/env.py (2)

31-31: SAMPLER_RECORD may unintentionally be truthy for string values like "0" or "false".

os.environ.get("SAMPLER_RECORD", False) returns a string when the variable is set, so SAMPLER_RECORD="0" would be truthy. If this matches the existing pattern in deepmd/pt/utils/env.py, it's fine to keep consistent, but worth noting.


1-127: Significant code duplication with deepmd/pt/utils/env.py.

This file appears to be largely a copy of the PT backend's env module. While this may be intentional to keep pt_expt independent, you might consider extracting the shared logic (precision dicts, device detection, thread config) into a common base module to reduce maintenance burden. Both modules would then only override backend-specific constants (e.g., JIT, CUSTOM_OP_USE_JIT).

deepmd/dpmodel/utils/network.py (1)

816-826: Mutable default argument for neuron.

Ruff B006 flags neuron: list[int] = [24, 48, 96]. While it's safe here (the list is never mutated in-place), this is a well-known Python footgun. That said, the same pattern exists in the factory function on line 714 and is consistent across the codebase, so changing it now might be out of scope.

Optional fix
     def __init__(
         self,
         in_dim: int,
-        neuron: list[int] = [24, 48, 96],
+        neuron: list[int] | None = None,
         activation_function: str = "tanh",
         resnet_dt: bool = False,
         precision: str = DEFAULT_PRECISION,
         seed: int | list[int] | None = None,
         bias: bool = True,
         trainable: bool | list[bool] = True,
     ) -> None:
+        if neuron is None:
+            neuron = [24, 48, 96]
         layers = []
source/tests/consistent/common.py (1)

70-79: Pre-existing: duplicate "CommonTest" in __all__.

Line 78 duplicates line 77. Not introduced by this PR, but worth noting.

deepmd/pt_expt/utils/network.py (1)

27-37: Remove unused noqa directive.

Ruff correctly flags noqa: PYI034 as unnecessary here since PYI034 only applies to .pyi stub files.

🧹 Proposed fix
-    def __new__(  # noqa: PYI034
+    def __new__(
         cls, data: Any = None, requires_grad: bool = True
     ) -> "TorchArrayParam":
source/tests/pt_expt/utils/test_common.py (1)

14-23: Consider adding a test for None input.

Per the to_torch_array docstring, passing None should return None. This edge case isn't covered.

Suggested addition
def test_to_torch_array_none() -> None:
    assert to_torch_array(None) is None
deepmd/pt_expt/descriptor/__init__.py (1)

3-3: Remove unused noqa directive per static analysis.

Ruff (RUF100) reports the # noqa: F401 directive is unnecessary here since the F401 rule wouldn't trigger for this import.

Proposed fix
-from . import se_t_tebd_block  # noqa: F401
+from . import se_t_tebd_block  # register converters
source/tests/pt_expt/descriptor/test_se_t_tebd.py (1)

99-100: Track or resolve the TODO about gr being None.

The comment # TODO: investigate why gr is None suggests an unresolved question about the descriptor's output. If gr is expected to be non-None for se_t_tebd, this could indicate a bug in either the pt_expt or dpmodel implementation.

Would you like me to open an issue to track this, or help investigate the gr return path?

source/tests/pt_expt/descriptor/test_se_t.py (2)

42-44: Consider testing exclude_types combinations for completeness.

Unlike test_se_r.py and test_se_e2_a.py (which iterate over exclude_types variations like [[], [[0, 1]], [[1, 1]]]), this test only varies resnet_dt and precision. The consistent test (test_se_t.py) does parameterize over excluded_types, so it may be worth exercising that path here too (especially the reinit_exclude codepath tested in test_se_e2_a.py).


65-65: Unused variables gr1 and gr2 flagged by Ruff.

Per the comment on line 90, se_t returns None for gr/g2/h2, so these are intentionally not compared. Prefix with _ to suppress the lint warning.

-            rd1, gr1, _, _, sw1 = dd1(
+            rd1, _gr1, _, _, sw1 = dd1(
...
-            rd2, gr2, _, _, sw2 = dd2.call(
+            rd2, _gr2, _, _, sw2 = dd2.call(

Also applies to: 85-85

Comment on lines 37 to 42
features: ClassVar[Backend.Feature] = (
Backend.Feature.ENTRY_POINT
| Backend.Feature.DEEP_EVAL
| Backend.Feature.NEIGHBOR_STAT
| Backend.Feature.IO
)

⚠️ Potential issue | 🟡 Minor

Feature flags declare capabilities that raise NotImplementedError.

DEEP_EVAL, NEIGHBOR_STAT, and IO are declared as supported features, but their corresponding properties (deep_eval, neighbor_stat, serialize_hook, deserialize_hook) raise NotImplementedError. If a caller checks backend.features & Feature.DEEP_EVAL and then accesses backend.deep_eval, it'll get an unexpected error. Consider either removing the feature flags until implemented, or adding a brief comment explaining the roadmap.
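A self-contained toy illustration of how the mismatch bites a caller (the class below is not the real Backend; it only mirrors the description above):

import enum

class Feature(enum.Flag):
    ENTRY_POINT = enum.auto()
    DEEP_EVAL = enum.auto()

class ToyExportableBackend:
    features = Feature.ENTRY_POINT | Feature.DEEP_EVAL

    @property
    def deep_eval(self):
        raise NotImplementedError("deep_eval is not implemented yet")

backend = ToyExportableBackend()
try:
    if backend.features & Feature.DEEP_EVAL:  # capability check passes...
        backend.deep_eval  # ...but the accessor still raises
except NotImplementedError as err:
    print(f"advertised feature is unusable: {err}")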


Comment on lines +876 to +896
@classmethod
def deserialize(cls, data: dict) -> "EmbeddingNet":
    """Deserialize the network from a dict.

    Parameters
    ----------
    data : dict
        The dict to deserialize from.
    """
    data = data.copy()
    check_version_compatibility(data.pop("@version", 1), 2, 1)
    data.pop("@class", None)
    layers = data.pop("layers")
    obj = cls(**data)
    # Reinitialize layers from serialized data, using the same layer type
    # that __init__ created (respects subclass overrides via MRO).
    layer_type = type(obj.layers[0])
    obj.layers = type(obj.layers)(
        [layer_type.deserialize(layer) for layer in layers]
    )
    return obj

⚠️ Potential issue | 🟡 Minor

deserialize assumes obj.layers is non-empty (Line 892).

layer_type = type(obj.layers[0]) will raise IndexError if neuron is an empty list. While this is unlikely in practice, the factory-based deserialize (line 784–786) didn't have this assumption since it called super(EN, obj).__init__(layers) directly.

If neuron=[] is never valid, this is fine as-is. Otherwise, a guard would prevent a confusing traceback.

Defensive fix
         obj = cls(**data)
         # Reinitialize layers from serialized data, using the same layer type
         # that __init__ created (respects subclass overrides via MRO).
-        layer_type = type(obj.layers[0])
-        obj.layers = type(obj.layers)(
-            [layer_type.deserialize(layer) for layer in layers]
-        )
+        if obj.layers:
+            layer_type = type(obj.layers[0])
+        else:
+            layer_type = NativeLayer
+        obj.layers = type(obj.layers)(
+            [layer_type.deserialize(layer) for layer in layers]
+        )

Comment on lines 14 to 15
# Import network to ensure EmbeddingNet is registered before TypeEmbedNet is used
from deepmd.pt_expt.utils import network # noqa: F401

⚠️ Potential issue | 🟡 Minor

Remove the unnecessary noqa: F401 directive since F401 is globally ignored in the project's Ruff configuration.

The side-effect import on line 15 is appropriate and well-documented. However, the project's ruff.toml explicitly ignores F401 globally, making the noqa: F401 comment unnecessary. This will also prevent RUF100 warnings about unused noqa directives.

Remove # noqa: F401 from line 15.

🧰 Tools: 🪛 Ruff (0.14.14) reports RUF100 at line 15: unused noqa directive (non-enabled: F401); remove the unused noqa directive.


if INSTALLED_PT:
    from deepmd.pt.model.descriptor.se_r import DescrptSeR as DescrptSeRPT
else:
    DescrptSeAPT = None

⚠️ Potential issue | 🟡 Minor

Pre-existing bug: fallback variable name mismatch.

When INSTALLED_PT is False, line 30 sets DescrptSeAPT = None but pt_class on line 134 references DescrptSeRPT. If PT is not installed, this would cause a NameError at class definition time. The same issue exists on line 38 with DescrptSeATF vs DescrptSeRTF. This predates the current PR but is worth noting.

Suggested fix (pre-existing)
 if INSTALLED_PT:
     from deepmd.pt.model.descriptor.se_r import DescrptSeR as DescrptSeRPT
 else:
-    DescrptSeAPT = None
+    DescrptSeRPT = None
 if INSTALLED_TF:
     from deepmd.tf.descriptor.se_r import DescrptSeR as DescrptSeRTF
 else:
-    DescrptSeATF = None
+    DescrptSeRTF = None

Comment on lines 149 to 159
type_embedding = torch.randn(
    [self.nt, dd0.tebd_dim], dtype=dtype, device=self.device
)

inputs = (
    torch.tensor(self.coord_ext, dtype=dtype, device=self.device),
    torch.tensor(self.atype_ext, dtype=int, device=self.device),
    torch.tensor(self.nlist, dtype=int, device=self.device),
    type_embedding,
)
torch.export.export(dd0, inputs)

⚠️ Potential issue | 🟠 Major


Positional argument mismatch: type_embedding is bound to extended_atype_embd parameter.

In test_exportable, the inputs tuple passes type_embedding as the 4th positional argument. The forward signature is:

def forward(self, extended_coord, extended_atype, nlist, extended_atype_embd=None, mapping=None, type_embedding=None)

So the 4th positional arg binds to extended_atype_embd, not type_embedding. Since forward deletes all optional params (del extended_atype_embd, mapping, type_embedding), this doesn't cause a runtime error — but the test silently exports with type_embedding=None, which may not be the intended export behavior.

If the intent is to export with type_embedding, pass None placeholders for the intervening parameters or use kwargs with torch.export.export.

Proposed fix (if type_embedding should be passed)
             inputs = (
                 torch.tensor(self.coord_ext, dtype=dtype, device=self.device),
                 torch.tensor(self.atype_ext, dtype=int, device=self.device),
                 torch.tensor(self.nlist, dtype=int, device=self.device),
+                None,  # extended_atype_embd
+                None,  # mapping
                 type_embedding,
             )
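Equivalently, the optional argument can be passed by keyword, since torch.export.export accepts a kwargs dict. A sketch using the test's existing names:

# Sketch: keyword-argument export avoids the positional mismatch entirely.
exported = torch.export.export(
    dd0,
    args=(
        torch.tensor(self.coord_ext, dtype=dtype, device=self.device),
        torch.tensor(self.atype_ext, dtype=int, device=self.device),
        torch.tensor(self.nlist, dtype=int, device=self.device),
    ),
    kwargs={"type_embedding": type_embedding},
)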

Comment on lines +162 to +164
# Test forward pass
rng = np.random.default_rng()
x_np = rng.standard_normal((5, self.in_dim))

⚠️ Potential issue | 🟡 Minor

Non-deterministic RNG may cause flaky test failures.

np.random.default_rng() is called without a seed, making this test non-reproducible. Every other test in this file uses GLOBAL_SEED. If a numerical tolerance mismatch occurs, it won't be reproducible.

Proposed fix
-        rng = np.random.default_rng()
+        rng = np.random.default_rng(GLOBAL_SEED)

@wanghan-iapcm wanghan-iapcm deleted the feat-se-t branch February 8, 2026 15:51

codecov bot commented Feb 8, 2026

Codecov Report

❌ Patch coverage is 97.70115% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 81.87%. Comparing base (5c2ca51) to head (2804070).

Files with missing lines     Patch %   Lines
deepmd/pt_expt/common.py     92.85%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5206      +/-   ##
==========================================
- Coverage   81.99%   81.87%   -0.12%     
==========================================
  Files         724      724              
  Lines       73807    73828      +21     
  Branches     3616     3615       -1     
==========================================
- Hits        60519    60448      -71     
- Misses      12124    12217      +93     
+ Partials     1164     1163       -1     

☔ View full report in Codecov by Sentry.

