Add CUDA argmax kernel for LLM sampler #16386
base: main
Conversation
🔗 Helpful Links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16386.
CI status as of commit f8cd4d2 with merge base c5d66a5: ❌ 2 new failures, 2 unrelated failures (one flaky job and one unstable job, both likely due to flakiness on trunk). This summary was generated automatically by Dr. CI and updates every 15 minutes.
```cpp
#include <executorch/extension/llm/sampler/cuda_sampler.h>
#include <executorch/runtime/platform/log.h>

namespace executorch {
```
Consider using a nested namespace definition to follow the C++17 standard.
```cpp
        (const nv_bfloat16*)logits, rows, vocab, out_token, out_maxlogit);
    break;
  default:
    // Unsupported type, fall back to float
```
Perhaps we should raise an error here instead, to avoid a silent failure?
Add a CUDA kernel for the argmax operation to support GPU-based sampling:
- Uses parallel reduction for efficient max finding; supports float, half, and bfloat16.
- Handles kernel launch, device-to-host copy, and synchronization.
- Includes GoogleTest-based unit tests covering data types, edge cases, and numerical precision.