
refact (pt_expt): provide infrastructure for converting dpmodel classes to PyTorch modules. #5204

wanghan-iapcm wants to merge 34 commits into deepmodeling:master from wanghan-iapcm:refact-auto-setattr

Conversation

@wanghan-iapcm
Collaborator

@wanghan-iapcm wanghan-iapcm commented Feb 8, 2026

To be considered after the merge of #5194.

automatically wrapping dpmodel classes (array_api_compat-based) as PyTorch modules. The key insight is to detect attributes by their value type rather than by hard-coded names.
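The "detect by value type" idea can be sketched in a few lines. This is a hypothetical, torch-free illustration of the dispatch, not the PR's actual API: `wrap_by_type`, `Buffer`, and `WrappedModule` are stand-ins (in the real code, numpy arrays would become registered torch buffers on an `nn.Module`).

```python
import numpy as np

class Buffer:
    """Stand-in for a registered torch buffer."""
    def __init__(self, array: np.ndarray) -> None:
        self.array = array

def wrap_by_type(value):
    # numpy arrays get wrapped; everything else passes through untouched
    if isinstance(value, np.ndarray):
        return Buffer(value)
    return value

class WrappedModule:
    # a __setattr__ hook routes each value by its runtime type,
    # instead of consulting a hard-coded list of attribute names
    def __setattr__(self, name, value):
        super().__setattr__(name, wrap_by_type(value))

m = WrappedModule()
m.mean = np.zeros(3)   # auto-wrapped: decided by type, not by name
m.ntypes = 2           # plain attribute, untouched
assert isinstance(m.mean, Buffer)
assert m.ntypes == 2
```

Because the decision is type-based, a new numpy-valued attribute added to a dpmodel class is picked up without touching the wrapper.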

Summary by CodeRabbit

  • New Features
    • Registry-driven conversion for DP objects to PyTorch modules enabling automatic wrapper creation.
    • New PyTorch-friendly descriptor variants with stable forward outputs for se_e2_a and se_r.
    • PyTorch-wrapped exclude-mask utilities and a NetworkCollection of wrapped network types for proper module/state handling.
    • Device-aware tensor conversion and robust handling of numpy buffers and None-valued buffers for reliable serialization/movement.

@wanghan-iapcm wanghan-iapcm requested a review from njzjz February 8, 2026 13:21
@github-actions github-actions bot added the Python label Feb 8, 2026
@gemini-code-assist

Summary of Changes

Hello @wanghan-iapcm, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes a foundational framework for integrating dpmodel classes with PyTorch's export capabilities. By introducing a new pt-expt backend and a smart attribute wrapping mechanism, it streamlines the process of converting dpmodel components into exportable PyTorch modules. The changes also include crucial adjustments for device handling and robust threading configurations, ensuring stability and compatibility across the system.

Highlights

  • New PyTorch Exportable Backend: Introduced a new backend, pt-expt (PyTorch Exportable), to support the conversion of dpmodel classes into PyTorch modules, enabling torch.export compatibility.
  • Automated Attribute Wrapping: Implemented a core infrastructure that automatically wraps dpmodel attributes (like NumPy arrays and nested dpmodel objects) into PyTorch buffers and modules, respectively, based on their value type. This reduces the need for hard-coded attribute names and improves maintainability.
  • Device and Threading Enhancements: Updated dpmodel descriptor classes to ensure correct device placement for arrays during operations and improved PyTorch threading configuration by adding guards to prevent RuntimeError when setting inter-op and intra-op threads multiple times.
  • Expanded Test Coverage: Added comprehensive test cases for the new pt-expt backend, including consistency checks against reference backends and self-consistency tests, as well as specific tests for descriptor and utility modules to ensure proper functionality and exportability.
Changelog
  • deepmd/backend/pt_expt.py
    • Added PyTorchExportableBackend class, registering 'pt-expt' and 'pytorch-exportable' backends.
  • deepmd/dpmodel/common.py
    • Modified to_numpy_array to catch TypeError and explicitly move arrays to 'cpu' device before conversion to NumPy, resolving potential BufferError.
  • deepmd/dpmodel/descriptor/dpa1.py
    • Added device argument to xp.asarray calls in compute_input_stats for explicit device placement.
  • deepmd/dpmodel/descriptor/repflows.py
    • Added device argument to xp.asarray calls in compute_input_stats for explicit device placement.
  • deepmd/dpmodel/descriptor/repformers.py
    • Added device argument to xp.asarray calls in compute_input_stats for explicit device placement.
  • deepmd/dpmodel/descriptor/se_e2_a.py
    • Added device argument to xp.asarray calls in compute_input_stats and xp.zeros call in call for explicit device placement.
  • deepmd/dpmodel/descriptor/se_r.py
    • Added device argument to xp.asarray calls in compute_input_stats and xp.zeros call in call for explicit device placement.
  • deepmd/dpmodel/descriptor/se_t.py
    • Added device argument to xp.asarray calls in compute_input_stats for explicit device placement.
  • deepmd/dpmodel/descriptor/se_t_tebd.py
    • Added device argument to xp.asarray calls in compute_input_stats for explicit device placement.
  • deepmd/env.py
    • Corrected environment variable lookup for inter-op parallelism threads from TF_INTRA_OP_PARALLELISM_THREADS to TF_INTER_OP_PARALLELISM_THREADS.
  • deepmd/pt/utils/env.py
    • Added try-except blocks and checks to torch.set_num_interop_threads and torch.set_num_threads to prevent RuntimeError if called multiple times and log warnings instead.
  • deepmd/pt_expt/__init__.py
    • Added new __init__.py file for the pt_expt package.
  • deepmd/pt_expt/common.py
    • Introduced _DPMODEL_TO_PT_EXPT registry, register_dpmodel_mapping, try_convert_module, and dpmodel_setattr for type-based automatic attribute conversion and PyTorch module wrapping.
    • Added to_torch_array utility function for converting various array types to PyTorch tensors on the pt_expt device.
  • deepmd/pt_expt/descriptor/__init__.py
    • Added new __init__.py file for the pt_expt.descriptor package, exposing BaseDescriptor, DescrptSeA, and DescrptSeR.
  • deepmd/pt_expt/descriptor/base_descriptor.py
    • Defined BaseDescriptor for pt_expt by adapting dpmodel.descriptor.make_base_descriptor for PyTorch tensors.
  • deepmd/pt_expt/descriptor/se_e2_a.py
    • Added DescrptSeA PyTorch exportable wrapper for dpmodel.descriptor.se_e2_a.DescrptSeAArrayAPI, inheriting from torch.nn.Module and utilizing dpmodel_setattr.
  • deepmd/pt_expt/descriptor/se_r.py
    • Added DescrptSeR PyTorch exportable wrapper for dpmodel.descriptor.se_r.DescrptSeR, inheriting from torch.nn.Module and utilizing dpmodel_setattr.
  • deepmd/pt_expt/utils/__init__.py
    • Added new __init__.py file for the pt_expt.utils package, exposing AtomExcludeMask, PairExcludeMask, and NetworkCollection.
  • deepmd/pt_expt/utils/env.py
    • Added environment configuration for the pt_expt backend, including device selection, precision mappings, and guarded thread setting logic similar to deepmd/pt/utils/env.py.
  • deepmd/pt_expt/utils/exclude_mask.py
    • Added AtomExcludeMask and PairExcludeMask PyTorch exportable wrappers for dpmodel exclude masks, registering them for automatic conversion.
  • deepmd/pt_expt/utils/network.py
    • Added PyTorch exportable wrappers for network components (TorchArrayParam, NativeLayer, NativeNet, EmbeddingNet, FittingNet, NetworkCollection, LayerNorm), handling parameter/buffer registration and dpmodel_setattr.
  • pyproject.toml
    • Updated banned-module-level-imports to include deepmd.pt_expt.
    • Updated per-file-ignores to include deepmd/pt_expt/** and source/tests/pt_expt/** for specific linting rules.
  • source/tests/consistent/common.py
    • Added INSTALLED_PT_EXPT flag and pt_expt_class attribute to CommonTest.
    • Introduced skip_pt_expt property and eval_pt_expt method for PyTorch exportable tests.
    • Added RefBackend.PT_EXPT to the RefBackend enum.
    • Added get_pt_expt_ret_serialization_from_cls method for serialization testing.
    • Updated get_reference_backend and get_reference_ret_serialization to include the new PT_EXPT backend.
    • Added test_pt_expt_consistent_with_ref and test_pt_expt_self_consistent test methods.
  • source/tests/consistent/descriptor/common.py
    • Modified INSTALLED_PT check to include INSTALLED_PT_EXPT for torch imports.
    • Added eval_pt_expt_descriptor method for evaluating PyTorch exportable descriptors.
  • source/tests/consistent/descriptor/test_se_e2_a.py
    • Updated INSTALLED_PT check to include INSTALLED_PT_EXPT.
    • Added DescrptSeAPTExpt import and assigned it to pt_expt_class.
    • Added skip_pt_expt property and eval_pt_expt method for DescrptSeA tests.
  • source/tests/consistent/descriptor/test_se_r.py
    • Updated INSTALLED_PT check to include INSTALLED_PT_EXPT.
    • Added DescrptSeRPTExpt import and assigned it to pt_expt_class.
    • Added skip_pt_expt property and eval_pt_expt method for DescrptSeR tests.
  • source/tests/pt/test_env_threads.py
    • Added new test file to verify the robustness of thread setting guards in deepmd.pt.utils.env.
  • source/tests/pt_expt/__init__.py
    • Added new __init__.py file for the pt_expt test package.
  • source/tests/pt_expt/conftest.py
    • Added pytest.importorskip("torch") to ensure PyTorch is available for pt_expt tests.
  • source/tests/pt_expt/descriptor/__init__.py
    • Added new __init__.py file for the pt_expt.descriptor test package.
  • source/tests/pt_expt/descriptor/test_se_e2_a.py
    • Added new test file for pt_expt.descriptor.se_e2_a.DescrptSeA, including consistency and exportability tests.
  • source/tests/pt_expt/descriptor/test_se_r.py
    • Added new test file for pt_expt.descriptor.se_r.DescrptSeR, including consistency and exportability tests.
  • source/tests/pt_expt/model/__init__.py
    • Added new __init__.py file for the pt_expt.model test package.
  • source/tests/pt_expt/utils/__init__.py
    • Added new __init__.py file for the pt_expt.utils test package.
  • source/tests/pt_expt/utils/test_common.py
    • Added new test file for deepmd.pt_expt.common.to_torch_array to verify device movement.
  • source/tests/pt_expt/utils/test_env.py
    • Added new test file to verify the robustness of thread setting guards in deepmd.pt_expt.utils.env.
  • source/tests/pt_expt/utils/test_exclusion_mask.py
    • Added new test file for deepmd.pt_expt.utils.exclude_mask.AtomExcludeMask and PairExcludeMask.
  • source/tests/pt_expt/utils/test_network.py
    • Added new test file for deepmd.pt_expt.utils.network.NativeLayer to verify parameter/buffer clearing.
Activity
  • The pull request introduces a new PyTorch exportable backend, pt-expt, and a comprehensive set of infrastructure to support it.
  • Existing dpmodel descriptor classes have been adapted to work seamlessly with the new PyTorch exportable modules, including explicit device handling.
  • Improvements were made to thread configuration logic in both the standard PyTorch backend and the new exportable backend to prevent runtime errors.
  • Extensive new test files and modifications to existing test frameworks have been added to validate the consistency and exportability of the new PyTorch modules.

@dosubot dosubot bot added the new feature label Feb 8, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant and well-designed infrastructure for a new "PyTorch Exportable" (pt-expt) backend. The core idea of automatically wrapping dpmodel classes into torch.nn.Modules by detecting attribute types is clever and should improve maintainability. The changes are extensive, touching not only the new backend implementation but also core dpmodel components for better device consistency and the testing framework for validation.

My review has identified two important bug fixes: one related to environment variable handling for parallelism and another related to thread assignment in the PyTorch environment. I've also pointed out an opportunity to reduce code duplication between the pt and the new pt_expt environment setup modules, which would improve long-term maintainability.

Overall, this is a high-quality contribution that lays a solid foundation for making dpmodels compatible with torch.export.

@coderabbitai
Contributor

coderabbitai bot commented Feb 8, 2026

📝 Walkthrough


Introduces a registry-driven dpmodel→PyTorch converter, device-aware array conversion and unified setattr flow, PyTorch wrapper descriptor classes (DescrptSeA, DescrptSeR), exclude-mask wrappers, and PyTorch-compatible network and NetworkCollection wrappers.

Changes

Cohort / File(s) Summary
Core Infrastructure
deepmd/pt_expt/common.py
Adds dpmodel→pt_expt registry (register_dpmodel_mapping), converter lookup (try_convert_module), unified setattr handler (dpmodel_setattr), and device-aware to_torch_array with overloads; defers env import and uses NativeOP.
Descriptor Wrappers
deepmd/pt_expt/descriptor/se_e2_a.py, deepmd/pt_expt/descriptor/se_r.py
New PyTorch-exportable descriptor wrappers (DescrptSeA, DescrptSeR) combining dpmodel bases and torch.nn.Module, registered identifiers, custom __setattr__ via dpmodel_setattr, and forward delegating to underlying call.
Exclude Mask Wrappers
deepmd/pt_expt/utils/exclude_mask.py, deepmd/pt_expt/utils/__init__.py
Adds AtomExcludeMask and PairExcludeMask PyTorch wrappers (dpmodel + torch.nn.Module), registers converters to build wrappers from dpmodel objects, and exposes NetworkCollection along with masks in utils exports.
Network System
deepmd/pt_expt/utils/network.py
Adds TorchArrayParam, NativeLayer, NativeNet, EmbeddingNet, FittingNet, LayerNorm, and NetworkCollection (ModuleDict/ModuleList-based); integrates dpmodel mapping registration and dpmodel-aware attribute handling for parameters/state.
Package Exports
deepmd/pt_expt/utils/__init__.py
Re-exports AtomExcludeMask, PairExcludeMask, and NetworkCollection, and registers EnvMat identity converter.

Sequence Diagram

sequenceDiagram
    participant User
    participant Registry as Registry System
    participant Converter
    participant DPModel as DPModel Object
    participant Wrapper as PT_EXPT Wrapper
    participant TorchModule as torch.nn.Module

    User->>Registry: register_dpmodel_mapping(dpmodel_cls, converter)
    Registry->>Registry: Store mapping

    User->>Wrapper: try_convert_module(dpmodel_obj)
    Wrapper->>Registry: Lookup converter for dpmodel_obj.__class__
    alt Mapping found
        Registry-->>Wrapper: converter callable
        Wrapper->>Converter: converter(dpmodel_obj)
        Converter->>DPModel: Read config/state (ntypes, arrays, etc.)
        Converter->>Wrapper: Instantiate wrapper with converted args
        Wrapper->>TorchModule: Initialize torch.nn.Module base
        Wrapper-->>User: Return pt_expt wrapper instance
    else No mapping
        Registry-->>Wrapper: None
        Wrapper-->>User: Return None
    end

    User->>Wrapper: setattr(wrapper, name, value)
    Wrapper->>Wrapper: dpmodel_setattr(name, value)
    alt value is numpy array
        Wrapper->>Wrapper: to_torch_array(value) -> Tensor
        Wrapper->>TorchModule: register_buffer / set parameter
    else value is dpmodel object
        Wrapper->>Registry: try_convert_module(value)
        Registry-->>Wrapper: converted wrapper or None
        Wrapper->>TorchModule: set attribute to converted module or original
    else other
        Wrapper->>TorchModule: normal setattr
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • njzjz
  • iProzd
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 34.91% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly describes the main change: introducing infrastructure for converting dpmodel classes to PyTorch modules, which is the core purpose of this PR.



No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
deepmd/pt_expt/utils/__init__.py (1)

18-20: Identity converter returns a NativeOP, not a torch.nn.Module — type mismatch with the registry's declared signature.

register_dpmodel_mapping declares converter: Callable[[NativeOP], torch.nn.Module], but lambda v: v returns the original EnvMat (a NativeOP). This works at runtime because dpmodel_setattr passes the result back to super().__setattr__ which stores it as a plain attribute. However, it silently violates the type contract and could confuse future maintainers or type-checkers.

Consider either:

  • Widening the converter return type to torch.nn.Module | Any to acknowledge passthrough converters, or
  • Adding a dedicated "skip" registration (e.g., register_dpmodel_passthrough(EnvMat)) that makes the intent explicit.
deepmd/pt_expt/common.py (5)

204-210: Buffer registration during __init__ can silently promote any numpy array to a persistent buffer.

Every self.xxx = np.array(...) in the dpmodel __init__ (after torch.nn.Module.__init__()) will auto-register a buffer. If a dpmodel class ever adds a temporary numpy array as an instance attribute (e.g., a cache or scratch space), it would silently become a persistent buffer in the state dict. This is safe today given the dpmodel convention documented in the docstring, but it's worth noting the implicit contract: dpmodel classes must never store transient numpy arrays as instance attributes.


217-231: Exact-type registry lookup won't match subclasses of registered dpmodel classes.

try_convert_module uses type(value) (line 118), so a subclass of a registered dpmodel class will miss the converter and hit the TypeError on line 226. This is documented as intentional, but if dpmodel classes are ever subclassed (e.g., for testing or specialization), this will be a surprising failure. Consider falling back to an MRO walk if exact match fails:

♻️ Optional: MRO fallback
 def try_convert_module(value: Any) -> torch.nn.Module | None:
     converter = _DPMODEL_TO_PT_EXPT.get(type(value))
+    if converter is None:
+        for cls in type(value).__mro__[1:]:
+            converter = _DPMODEL_TO_PT_EXPT.get(cls)
+            if converter is not None:
+                break
     if converter is not None:
         return converter(value)
     return None

224-231: Move long message to a custom exception class (static analysis hint).

Ruff TRY003 flags the inline message. A small UnregisteredDPModelError would also make it easier for callers to catch programmatically.

♻️ Suggested refactor
+class UnregisteredDPModelError(TypeError):
+    """Raised when a dpmodel NativeOP is assigned but has no registered converter."""
+    def __init__(self, cls_name: str):
+        super().__init__(
+            f"Attempted to assign a dpmodel object of type {cls_name} "
+            f"but no converter is registered. Please call register_dpmodel_mapping "
+            f"for this type. If this object doesn't need conversion, register it "
+            f"with an identity converter: lambda v: v"
+        )

 ...
-            if isinstance(value, NativeOP):
-                raise TypeError(
-                    f"Attempted to assign a dpmodel object of type {type(value).__name__} "
-                    ...
-                )
+            if isinstance(value, NativeOP):
+                raise UnregisteredDPModelError(type(value).__name__)

304-304: Remove unused # noqa directive.

Ruff reports # noqa: F401 is unnecessary here (rule not enabled). Remove it to keep the codebase clean.

🧹 Suggested fix
-    from deepmd.pt_expt import utils  # noqa: F401
+    from deepmd.pt_expt import utils

297-307: Module-level _ensure_registrations() tightly couples common.py to the utils package.

Importing common.py (even just for to_torch_array) always triggers the full utils import chain. This is fine for the current use case but adds hidden import-time cost and makes it harder to use common.py in isolation (e.g., in tests). If this ever becomes a concern, lazy initialization (guarded by a flag) could defer the cost.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@deepmd/backend/pt_expt.py`:
- Around line 37-42: The declared features flag in pt_expt.py advertises
deep_eval, neighbor_stat and IO but the corresponding properties deep_eval,
neighbor_stat, serialize_hook and deserialize_hook raise NotImplementedError,
causing callers using Backend.get_backends_by_feature() (e.g.
entrypoints/neighbor_stat.py and entrypoints/main.py) to crash; to fix, change
the ClassVar features tuple on the Backend subclass to only include
Backend.Feature.ENTRY_POINT until you implement those hooks, or alternatively
implement working deep_eval, neighbor_stat, serialize_hook and deserialize_hook
methods that return the expected values; reference the symbols features,
deep_eval, neighbor_stat, serialize_hook, deserialize_hook and
Backend.get_backends_by_feature() when making the change.

In `@source/tests/consistent/descriptor/test_se_e2_a.py`:
- Around line 546-566: The eval_pt_expt method references env.DEVICE (in
eval_pt_expt) but env is only imported under the INSTALLED_PT guard; import the
correct env used by pt_expt to avoid NameError when INSTALLED_PT_EXPT is true
but INSTALLED_PT is false by adding an import from deepmd.pt_expt.utils.env (or
by importing env inside eval_pt_expt) so eval_pt_expt can always access
env.DEVICE; locate the eval_pt_expt function and update imports or add a local
import to ensure env is defined when this test runs.
🧹 Nitpick comments (6)
deepmd/pt_expt/utils/env.py (2)

1-127: Near-complete duplication of deepmd/pt/utils/env.py.

This file is virtually identical to deepmd/pt/utils/env.py — same constants, same device logic, same precision dicts, same thread guards. The only difference is the comment on line 97. When a constant or guard needs to change, both files must be updated in lockstep.

Consider extracting the shared logic into a common helper (e.g., deepmd/pt_common/env.py or a shared setup function) that both pt and pt_expt env modules call. Each backend module can then layer on its own specifics.


19-20: import torch is separated from other top-level imports.

The torch import sits after the deepmd.env block rather than grouped with numpy at line 7. This appears to be an oversight from code movement.

Suggested fix
 import numpy as np
+import torch
 
 from deepmd.common import (
     VALID_PRECISION,
@@ -17,7 +18,6 @@
 )
 
 log = logging.getLogger(__name__)
-import torch
deepmd/pt_expt/utils/network.py (1)

27-37: TorchArrayParam — clean custom Parameter subclass.

The __array__ protocol implementation correctly detaches, moves to CPU, and converts to numpy, which enables seamless interop with dpmodel code that expects numpy arrays.

One minor note from static analysis: the # noqa: PYI034 on line 28 is flagged as unused by Ruff. It can be removed.

Suggested fix
-    def __new__(  # noqa: PYI034
+    def __new__(
         cls, data: Any = None, requires_grad: bool = True
     ) -> "TorchArrayParam":
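The `__array__` interop mentioned above can be shown torch-free: any object implementing `__array__` can be handed to numpy functions directly, which is what lets a torch Parameter subclass flow through dpmodel's numpy code. `ArrayLike` below is an illustrative stand-in, not the PR's TorchArrayParam:

```python
import numpy as np

class ArrayLike:
    """Minimal object implementing the numpy __array__ protocol."""
    def __init__(self, data):
        self._data = list(data)

    def __array__(self, dtype=None, copy=None):
        # TorchArrayParam's version additionally detaches and moves to CPU
        return np.array(self._data, dtype=dtype)

x = ArrayLike([1.0, 2.0, 3.0])
assert np.asarray(x).sum() == 6.0  # numpy transparently calls __array__
```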
deepmd/pt_expt/common.py (2)

45-82: register_dpmodel_mapping silently overwrites existing entries.

If a dpmodel class is registered twice (e.g., due to module re-import or conflicting registrations), the second call silently replaces the first converter with no warning. This could cause hard-to-debug issues during development.

Consider adding a warning or raising on duplicate registration:

Suggested guard
     """
+    if dpmodel_cls in _DPMODEL_TO_PT_EXPT:
+        import logging
+        logging.getLogger(__name__).warning(
+            f"Overwriting existing pt_expt mapping for {dpmodel_cls.__name__}"
+        )
     _DPMODEL_TO_PT_EXPT[dpmodel_cls] = converter

237-278: to_torch_array: torch.as_tensor shares memory with numpy arrays — document this intent or use torch.tensor if a copy is desired.

torch.as_tensor on line 278 shares memory with the source numpy array when possible (same dtype, CPU). If the caller later mutates the original array, the tensor changes too. The dpmodel convention (replace, don't mutate) makes this safe in practice, but it's a latent footgun for users of this utility who may not share that assumption. The docstring could note this behavior.
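The copy-vs-view distinction flagged here can be demonstrated with numpy's own `asarray`/`array` pair (shown numpy-only for portability; `torch.as_tensor` behaves analogously with a CPU numpy source, sharing memory when the dtype allows, while `torch.tensor` always copies):

```python
import numpy as np

src = np.zeros(3)
view = np.asarray(src)   # no copy: same underlying buffer
copy = np.array(src)     # always copies

src[0] = 1.0
assert view[0] == 1.0    # mutation is visible through the view
assert copy[0] == 0.0    # the copy is unaffected
```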

source/tests/pt/test_env_threads.py (1)

10-34: Near-identical test exists in source/tests/pt_expt/utils/test_env.py.

Both tests share the same monkeypatch/reload/assert pattern, differing only in the target env module. Consider extracting a shared helper parameterized by the module path to reduce duplication, though this is minor given the test's brevity.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3452a2a8c0


@codecov

codecov bot commented Feb 8, 2026

Codecov Report

❌ Patch coverage is 94.73684% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.00%. Comparing base (5c2ca51) to head (55e094e).
⚠️ Report is 2 commits behind head on master.

Files with missing lines Patch % Lines
deepmd/pt_expt/common.py 91.42% 3 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master    #5204   +/-   ##
=======================================
  Coverage   81.99%   82.00%           
=======================================
  Files         724      724           
  Lines       73807    73801    -6     
  Branches     3616     3615    -1     
=======================================
+ Hits        60519    60520    +1     
+ Misses      12124    12118    -6     
+ Partials     1164     1163    -1     


@wanghan-iapcm wanghan-iapcm changed the title refact: provide infrastructure for converting dpmodel classes to PyTorch modules. refact (pt_expt): provide infrastructure for converting dpmodel classes to PyTorch modules. Feb 8, 2026
@wanghan-iapcm wanghan-iapcm added the Test CUDA Trigger test CUDA workflow label Feb 8, 2026
@github-actions github-actions bot removed the Test CUDA Trigger test CUDA workflow label Feb 8, 2026
@wanghan-iapcm wanghan-iapcm requested a review from njzjz February 9, 2026 11:54
@njzjz njzjz added this pull request to the merge queue Feb 9, 2026
