This repository was archived by the owner on Dec 23, 2025. It is now read-only.

Conversation

@erin2722
Collaborator

Applying google#176 to our fork so that we can pick up its performance benefits now.

@erin2722 force-pushed the mathias-perf-tuning branch from 911f364 to 90b7d1b on February 5, 2024 15:18
-    std::memmove(dst + kShortMemCopy,
-                 static_cast<const uint8_t*>(src) + kShortMemCopy,
-                 64 - kShortMemCopy);
+    FixedSizeMemMove<kShortMemCopy>(


[Q] We've already copied kShortMemCopy bytes on line 1126. The old code then copied the remaining 64 - kShortMemCopy bytes when size > kShortMemCopy. The new code always copies another kShortMemCopy bytes, which assumes 2 * kShortMemCopy == 64, right? If so, should we add a static assertion verifying this?


It is defined as 32 on line 1100, 30 lines up. Adding a static_assert that 32 * 2 == 64 seems silly. Also, the original AVX implementation relied on this as well.


@samanca left a comment


I just have one question for @RedBeard0531, otherwise LGTM.

@erin2722 merged commit 80c9300 into v1.1.10 on Feb 7, 2024
