12 changes: 6 additions & 6 deletions README.rst
@@ -5,13 +5,13 @@
+-----------------------+

.. |Documentation| image:: https://img.shields.io/badge/api-reference-blue.svg
:target: https://arpes-v4.readthedocs.io/en/daredevil/
:target: https://arpes-corrected.readthedocs.io/en/latest/

|coverage| |docs_status| |code_format| |code style| |uv|


.. |docs_status| image:: https://readthedocs.org/projects/arpes-v4/badge/?version=stable&style=flat
:target: https://arpes-v4.readthedocs.io/en/stable/
.. |docs_status| image:: https://readthedocs.org/projects/arpes-corrected/badge/?version=stable&style=flat
:target: https://arpes-corrected.readthedocs.io/en/latest/
.. |coverage| image:: https://codecov.io/gh/arafune/arpes/graph/badge.svg?token=TW9EPVB1VE
:target: https://app.codecov.io/gh/arafune/arpes
.. |code style| image:: https://img.shields.io/badge/code%20style-black-000000.svg
@@ -155,7 +155,7 @@ PyArpes contribution after `cadaaae`_, |copy| 2023-2025 by Ryuichi Arafune, all

.. _cadaaae: https://github.com/arafune/arpes/commit/cadaaae0525d0889ef030cf18cf049da8fec2ee3
.. _Jupyter: https://jupyter.org/
.. _the documentation site: https://arpes-v4.readthedocs.io/en/daredevil
.. _contributing: https://arpes-v4.readthedocs.io/en/daredevil/contributing.html
.. _FAQ: https://arpes-v4.readthedocs.io/en/daredevil/faq.html
.. _the documentation site: https://arpes-corrected.readthedocs.io/en/latest
.. _contributing: https://arpes-corrected.readthedocs.io/en/latest/contributing.html
.. _FAQ: https://arpes-corrected.readthedocs.io/en/latest/faq.html

4 changes: 4 additions & 0 deletions docs/source/CHANGELOG.rst
@@ -9,6 +9,10 @@ Primary (X.-.-) version numbers are used to denote backwards
incompatibilities between versions, while minor (-.X.-) numbers
primarily indicate new features and documentation.

5.0.3 (2026-XX-XX)
^^^^^^^^^^^^^^^^^^


5.0.2 (2025-12-23)
^^^^^^^^^^^^^^^^^^

3 changes: 1 addition & 2 deletions docs/source/api.rst
@@ -141,7 +141,7 @@ Small-Angle Approximated and Volumetric Related
.. autosummary::
:toctree: generated/

utilities.conversion.core.convert_to_kspace
utilities.conversion.api.convert_to_kspace
analysis.forward_conversion.convert_coordinate_forward
analysis.forward_conversion.convert_through_angular_point
analysis.forward_conversion.convert_through_angular_pair
@@ -163,7 +163,6 @@ Utilities

utilities.conversion.fast_interp.Interpolator
utilities.conversion.bounds_calculations.full_angles_to_k
utilities.conversion.remap_manipulator.remap_coords_to

Conversion Implementations
~~~~~~~~~~~~~~~~~~~~~~~~~~
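This PR moves the documented home of ``convert_to_kspace`` from ``utilities.conversion.core`` to ``utilities.conversion.api`` (the matching import change appears in ``forward_conversion.py`` below). A minimal sketch with the updated import path; the momentum grid is illustrative and the input is assumed to be an already-loaded angle-space cut:

.. code-block:: python

   import numpy as np
   import xarray as xr

   from arpes.utilities.conversion.api import convert_to_kspace  # path per this PR


   def to_kspace(cut: xr.DataArray) -> xr.DataArray:
       """Convert an angle-space cut to momentum space on an explicit kp grid."""
       # The kp bounds below are placeholders; choose a range suited to the data.
       return convert_to_kspace(cut, kp=np.linspace(-1.0, 1.0, 400))
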
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -118,7 +118,7 @@ Gettinng started
================

See the section on the docs site about
`contributing <https://arpes-v4.readthedocs.io/contributing>`__ for
`contributing <https://arpes-corrected.readthedocs.io/contributing>`__ for
information on adding to PyARPES and rebuilding documentation from
source.

2 changes: 1 addition & 1 deletion docs/source/installation.rst
@@ -49,7 +49,7 @@ Additional Suggested Steps
1. Install and configure standard tools like
`Jupyter <https://jupyter.org/>`__ or `Jupyter Lab <https://jupyterlab.readthedocs.io/en/latest>`__.
2. Explore the documentation and example notebooks at
`the documentation site <https://arpes-v4.readthedocs.io/en/daredevil/>`__.
`the documentation site <https://arpes-corrected.readthedocs.io/en/latest/>`__.

Barebones kernel installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 changes: 1 addition & 1 deletion docs/source/notebooks/custom-dot-s-functionality.ipynb
@@ -208,7 +208,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
"version": "3.12.12"
}
},
"nbformat": 4,
5 changes: 3 additions & 2 deletions docs/source/spectra.rst
@@ -68,8 +68,9 @@ otherwise recorded by default.
Units
~~~~~

Spatial and angular coordinates are reported in millimeters and radians
respectively. Temperatures are everywhere recorded in Kelvin. Relative
Spatial coordinates are reported in millimeters. Angular coordinates are reported in radians or degrees
(radians are the default, for historical reasons). Within a single Dataset/DataArray, the angular units must be consistent.
Temperatures are everywhere recorded in Kelvin. Relative
times are reported in seconds. Currents are recorded in nanoamp unit.
Pressures are recorded in torr. Potentials are recorded in volts. Laser
pulse durations and other pump-probe quantities are reported in
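A minimal sketch of the convention described above (coordinate names and values are illustrative): if angles are recorded in degrees, convert them so that all angular coordinates of a DataArray share one unit, conventionally radians.

.. code-block:: python

   import numpy as np
   import xarray as xr

   # Illustrative only: convert a degree-valued analyzer angle to radians so
   # that every angular coordinate of the DataArray uses the same unit.
   phi_deg = np.linspace(-15.0, 15.0, 31)   # analyzer angle, degrees
   eV = np.linspace(-0.5, 0.1, 61)          # energy axis, eV

   cut = xr.DataArray(
       np.zeros((eV.size, phi_deg.size)),
       coords={"eV": eV, "phi": np.deg2rad(phi_deg)},  # phi stored in radians
       dims=("eV", "phi"),
   )
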
28 changes: 14 additions & 14 deletions src/arpes/_typing/attrs_property.py
@@ -172,15 +172,15 @@ class DAQInfo(TypedDict, total=False):
class Coordinates(TypedDict, total=False):
"""TypedDict for attrs."""

x: NDArray[np.float64] | float
y: NDArray[np.float64] | float
z: NDArray[np.float64] | float
alpha: NDArray[np.float64] | float
beta: NDArray[np.float64] | float
chi: NDArray[np.float64] | float
theta: NDArray[np.float64] | float
psi: NDArray[np.float64] | float
phi: NDArray[np.float64] | float
x: NDArray[np.floating] | float
y: NDArray[np.floating] | float
z: NDArray[np.floating] | float
alpha: NDArray[np.floating] | float
beta: NDArray[np.floating] | float
chi: NDArray[np.floating] | float
theta: NDArray[np.floating] | float
psi: NDArray[np.floating] | float
phi: NDArray[np.floating] | float


class Spectrometer(AnalyzerInfo, Coordinates, DAQInfo, total=False):
@@ -215,11 +215,11 @@ class ARPESAttrs(Spectrometer, LightSourceInfo, SampleInfo, total=False):


class KspaceCoords(TypedDict, total=False):
eV: NDArray[np.float64]
kp: NDArray[np.float64]
kx: NDArray[np.float64]
ky: NDArray[np.float64]
kz: NDArray[np.float64]
eV: NDArray[np.floating]
kp: NDArray[np.floating]
kx: NDArray[np.floating]
ky: NDArray[np.floating]
kz: NDArray[np.floating]


CoordsOffset: TypeAlias = Literal[
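The recurring annotation change in this file (and the files below) from ``NDArray[np.float64]`` to ``NDArray[np.floating]`` widens the hints to any floating-point dtype. A minimal sketch of the difference for static type checking; the helper below is hypothetical, not part of PyARPES:

.. code-block:: python

   import numpy as np
   from numpy.typing import NDArray


   def rescale(values: NDArray[np.floating]) -> NDArray[np.floating]:
       """Hypothetical helper: scale a float array of any precision into [0, 1]."""
       lo, hi = values.min(), values.max()
       return (values - lo) / (hi - lo)


   rescale(np.linspace(0.0, 1.0, 5, dtype=np.float32))  # accepted: float32 is a np.floating
   rescale(np.linspace(0.0, 1.0, 5))                    # accepted: float64 is a np.floating
   # Annotated with NDArray[np.float64], the float32 call would be flagged by a type checker.
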
2 changes: 1 addition & 1 deletion src/arpes/_typing/base.py
@@ -41,4 +41,4 @@

AnalysisRegion = Literal["copper_prior", "wide_angular", "narrow_angular"]

SelType = float | str | slice | list[float | str] | NDArray[np.float64]
SelType = float | str | slice | list[float | str] | NDArray[np.floating]
12 changes: 6 additions & 6 deletions src/arpes/_typing/plotting.py
@@ -61,7 +61,7 @@


class Line2DProperty(TypedDict, total=False):
agg_filter: Callable[[NDArray[np.float64], int], tuple[NDArray[np.float64], int, int]]
agg_filter: Callable[[NDArray[np.floating], int], tuple[NDArray[np.floating], int, int]]
alpha: float | None
animated: bool
antialiased: bool | list[bool]
@@ -119,15 +119,15 @@ class PolyCollectionProperty(Line2DProperty, total=False):
norm: Normalize | None
offset_transform: Transform
# offsets: (N, 2) or (2, ) array-like
sizes: NDArray[np.float64] | None
sizes: NDArray[np.floating] | None
transform: Transform
urls: list[str] | None


class MPLPlotKwargsBasic(TypedDict, total=False):
"""Kwargs for Axes.plot & Axes.fill_between."""

agg_filter: Callable[[NDArray[np.float64], int], tuple[NDArray[np.float64], int, int]]
agg_filter: Callable[[NDArray[np.floating], int], tuple[NDArray[np.floating], int, int]]
alpha: float | None
animated: bool
antialiased: bool | list[bool]
@@ -183,8 +183,8 @@ class MPLPlotKwargs(MPLPlotKwargsBasic, total=False):
randomness: float
solid_capstyle: CapStyleType
solid_joinstyle: JoinStyleType
xdata: NDArray[np.float64]
ydata: NDArray[np.float64]
xdata: NDArray[np.floating]
ydata: NDArray[np.floating]
zorder: float


@@ -247,7 +247,7 @@ class ColorbarParam(TypedDict, total=False):


class MPLTextParam(TypedDict, total=False):
agg_filter: Callable[[NDArray[np.float64], int], tuple[NDArray[np.float64], int, int]]
agg_filter: Callable[[NDArray[np.floating], int], tuple[NDArray[np.floating], int, int]]
alpha: float | None
animated: bool
antialiased: bool
4 changes: 2 additions & 2 deletions src/arpes/_typing/utils.py
@@ -48,12 +48,12 @@ def flatten_literals(literal_type: Incomplete) -> set[str]:


def is_dict_kspacecoords(
a_dict: dict[Hashable, NDArray[np.float64]] | dict[str, NDArray[np.float64]],
a_dict: dict[Hashable, NDArray[np.floating]] | dict[str, NDArray[np.floating]],
) -> TypeGuard[KspaceCoords]:
"""Checks if a dictionary contains k-space coordinates.

Args:
a_dict (dict[Hashable, NDArray[np.float64]] | dict[str, NDArray[np.float64]]):
a_dict (dict[Hashable, NDArray[np.floating]] | dict[str, NDArray[np.floating]]):
The dictionary to check.

Returns:
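A short usage sketch of the ``TypeGuard`` shown above (assuming the function is importable from ``arpes._typing.utils``, matching the file path; the coordinate values are made up):

.. code-block:: python

   import numpy as np
   from numpy.typing import NDArray

   from arpes._typing.utils import is_dict_kspacecoords  # assumed module path

   coords: dict[str, NDArray[np.floating]] = {
       "eV": np.linspace(-0.5, 0.1, 200),
       "kp": np.linspace(-1.0, 1.0, 400),
   }

   if is_dict_kspacecoords(coords):
       # Inside this branch a type checker narrows `coords` to KspaceCoords.
       kp_axis = coords["kp"]
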
8 changes: 4 additions & 4 deletions src/arpes/analysis/deconvolution.py
@@ -38,7 +38,7 @@
@update_provenance("Approximate Iterative Deconvolution")
def deconvolve_ice(
data: xr.DataArray,
psf: NDArray[np.float64],
psf: NDArray[np.floating],
n_iterations: int = 5,
deg: int | None = None,
) -> xr.DataArray:
@@ -57,7 +57,7 @@ def deconvolve_ice(
The deconvoled data in the same format.
"""
data = data if isinstance(data, xr.DataArray) else normalize_to_spectrum(data)
arr: NDArray[np.float64] = data.values
arr: NDArray[np.floating] = data.values
if deg is None:
deg = n_iterations - 3
iteration_steps = list(range(1, n_iterations + 1))
@@ -165,12 +165,12 @@ def make_psf(

if fwhm:
sigmas = {k: v / (2 * np.sqrt(2 * np.log(2))) for k, v in sigmas.items()}
cov: NDArray[np.float64] = np.zeros((len(sigmas), len(sigmas)), dtype=np.float64)
cov: NDArray[np.floating] = np.zeros((len(sigmas), len(sigmas)), dtype=np.float64)
for i, dim in enumerate(data.dims):
cov[i][i] = sigmas[dim] ** 2 # sigma is deviation, but multivariate_normal uses covariant
logger.debug(f"cov: {cov}")

psf_coords: dict[Hashable, NDArray[np.float64]] = {}
psf_coords: dict[Hashable, NDArray[np.floating]] = {}
for k in data.dims:
psf_coords[str(k)] = np.linspace(
-(pixels[str(k)] - 1) / 2 * strides[str(k)],
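For reference, the factor applied to ``sigmas`` in ``make_psf`` when ``fwhm`` is set is the standard Gaussian FWHM-to-standard-deviation conversion:

.. math::

   \mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma
   \qquad\Longrightarrow\qquad
   \sigma = \frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}}
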
6 changes: 3 additions & 3 deletions src/arpes/analysis/derivative.py
@@ -45,10 +45,10 @@ def _nothing_to_array(x: xr.DataArray) -> xr.DataArray:


def _vector_diff(
arr: NDArray[np.float64],
arr: NDArray[np.floating],
delta: tuple[DELTA, DELTA],
n: int = 1,
) -> NDArray[np.float64]:
) -> NDArray[np.floating]:
"""Computes finite differences along the vector delta, given as a tuple.

Using delta = (0, 1) is equivalent to np.diff(..., axis=1), while
@@ -138,7 +138,7 @@ def _gradient_modulus(
"""
spectrum = data if isinstance(data, xr.DataArray) else normalize_to_spectrum(data)
assert isinstance(spectrum, xr.DataArray)
values: NDArray[np.float64] = spectrum.values
values: NDArray[np.floating] = spectrum.values
gradient_vector = np.zeros(shape=(8, *values.shape))

gradient_vector[0, :-delta, :] = _vector_diff(values, (delta, 0))
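The ``_vector_diff`` docstring above notes that ``delta = (0, 1)`` is equivalent to ``np.diff(..., axis=1)``. A minimal sketch of that equivalence (illustrative only, not the library routine):

.. code-block:: python

   import numpy as np

   arr = np.arange(12, dtype=float).reshape(3, 4)

   # A (0, 1) step differences each element against its right-hand neighbour,
   # which is exactly np.diff along axis 1.
   assert np.array_equal(arr[:, 1:] - arr[:, :-1], np.diff(arr, axis=1))

   # A diagonal (1, 1) step differences against the lower-right neighbour instead.
   diagonal = arr[1:, 1:] - arr[:-1, :-1]
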
2 changes: 1 addition & 1 deletion src/arpes/analysis/filters.py
@@ -117,7 +117,7 @@ def boxcar_filter_arr(
if dim not in integered_size or integered_size[str(dim)] == 0:
integered_size[str(dim)] = default_size
widths_pixel: tuple[int, ...] = tuple([integered_size[str(k)] for k in arr.dims])
array_values: NDArray[np.float64] = np.nan_to_num(arr.values, nan=0.0, copy=True)
array_values: NDArray[np.floating] = np.nan_to_num(arr.values, nan=0.0, copy=True)

for _ in range(iteration_n):
array_values = ndimage.uniform_filter(
28 changes: 14 additions & 14 deletions src/arpes/analysis/forward_conversion.py
@@ -25,13 +25,13 @@
from arpes.debug import setup_logger
from arpes.provenance import update_provenance
from arpes.utilities import normalize_to_spectrum
from arpes.utilities.conversion.api import convert_to_kspace
from arpes.utilities.conversion.bounds_calculations import (
euler_to_kx,
euler_to_ky,
euler_to_kz,
full_angles_to_k,
)
from arpes.utilities.conversion.core import convert_to_kspace
from arpes.xarray_extensions.accessor.spectrum_type import EnergyNotation, SpectrumType

if TYPE_CHECKING:
@@ -53,7 +53,7 @@
LOGLEVEL = LOGLEVELS[1]
logger = setup_logger(__name__, LOGLEVEL)

A = TypeVar("A", NDArray[np.float64], float)
A = TypeVar("A", NDArray[np.floating], float)


def convert_coordinate_forward(
@@ -154,11 +154,11 @@ def convert_through_angular_pair( # noqa: PLR0913
data: xr.DataArray,
first_point: dict[Hashable, float],
second_point: dict[Hashable, float],
cut_specification: dict[str, NDArray[np.float64]],
transverse_specification: dict[str, NDArray[np.float64]],
cut_specification: dict[str, NDArray[np.floating]],
transverse_specification: dict[str, NDArray[np.floating]],
*,
relative_coords: bool = True,
**k_coords: NDArray[np.float64],
**k_coords: NDArray[np.floating],
) -> xr.DataArray:
"""Converts the lower dimensional ARPES cut passing through `first_point` and `second_point`.

@@ -263,11 +263,11 @@ def convert_through_angular_pair( # noqa: PLR0913
def convert_through_angular_point(
data: xr.DataArray,
coords: dict[Hashable, float],
cut_specification: dict[str, NDArray[np.float64]],
transverse_specification: dict[str, NDArray[np.float64]],
cut_specification: dict[str, NDArray[np.floating]],
transverse_specification: dict[str, NDArray[np.floating]],
*,
relative_coords: bool = True,
**k_coords: NDArray[np.float64],
**k_coords: NDArray[np.floating],
) -> xr.DataArray:
"""Converts the lower dimensional ARPES cut passing through given angular `coords`.

@@ -332,13 +332,13 @@ def convert_coordinates(
) -> xr.Dataset:
"""Converts coordinates forward in momentum."""

def unwrap_coord(coord: xr.DataArray | float) -> NDArray[np.float64] | float:
def unwrap_coord(coord: xr.DataArray | float) -> NDArray[np.floating] | float:
if isinstance(coord, xr.DataArray):
return coord.values
return coord

coord_names: set[str] = {"phi", "psi", "alpha", "theta", "beta", "chi", "hv", "eV"}
raw_coords: dict[str, NDArray[np.float64] | float] = {
raw_coords: dict[str, NDArray[np.floating] | float] = {
k: unwrap_coord(arr.S.lookup_offset_coord(k)) for k in coord_names
}
raw_angles = {k: v for k, v in raw_coords.items() if k not in {"eV", "hv"}}
@@ -357,8 +357,8 @@ def unwrap_coord(coord: xr.DataArray | float) -> NDArray[np.float64] | float:

def expand_to(
cname: str,
c: NDArray[np.float64] | float,
) -> NDArray[np.float64] | float:
c: NDArray[np.floating] | float,
) -> NDArray[np.floating] | float:
if isinstance(c, float):
return c
assert isinstance(c, np.ndarray)
@@ -460,7 +460,7 @@ def convert_coordinates_to_kspace_forward(arr: XrTypes) -> xr.Dataset:
("chi", "hv", "phi"): ["kx", "ky", "kz"],
}.get(tupled_momentum_compatibles, [])
full_old_dims: list[str] = [*momentum_compatibles, "eV"]
projection_vectors: NDArray[np.float64] = np.ndarray(
projection_vectors: NDArray[np.floating] = np.ndarray(
shape=tuple(len(arr.coords[d]) for d in full_old_dims),
dtype=object,
)
@@ -549,7 +549,7 @@ def _broadcast_by_dim_location(
data: xr.DataArray,
target_shape: tuple[int, ...],
dim_location: int | None = None,
) -> NDArray[np.float64]:
) -> NDArray[np.floating]:
if isinstance(data, xr.DataArray) and not data.dims:
data = data.item()
if isinstance(