feat: add TxtSlicesDataset to allow sampling slices from txt file for benchmarking #30156
base: main
Conversation
…al utils to another file, added basic test Signed-off-by: Julien Debache <jdebache@cpu-0007.cm.cluster> Signed-off-by: <>
Code Review
This pull request introduces TxtSlicesDataset for benchmarking, which samples data from a text file. It also includes significant refactoring by moving utility functions from datasets.py to a new dataset_utils.py file and improving typing throughout. The changes are well-structured. My review focuses on improving the robustness and reproducibility of the new TxtSlicesDataset and its tests. I've pointed out a resource leak in the tests and potential for non-reproducible behavior due to the use of the global random module. I've also identified a missing check that could lead to a crash with certain input files.
```python
def test_txt_slices(hf_tokenizer: PreTrainedTokenizerBase) -> None:
    # Write the text content to a temporary file
    with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as f:
        f.write(text_content)
        temp_file_path = f.name
```
The test creates a temporary file using tempfile.NamedTemporaryFile with delete=False but never cleans it up. This will leave temporary files on the system after the test suite runs, which can accumulate and consume disk space. It's better to use pytest's built-in tmp_path fixture, which automatically manages the lifecycle of temporary directories and files for tests.
With this change, you will also need to update the TxtSlicesDataset instantiation on line 34 to use str(temp_file_path).
```python
def test_txt_slices(hf_tokenizer: PreTrainedTokenizerBase, tmp_path) -> None:
    # Write the text content to a temporary file
    temp_file_path = tmp_path / "test.txt"
    temp_file_path.write_text(text_content)
```

```python
if len(self.text) == 0:
    raise ValueError("The text file is empty and cannot be sampled from.")

random.seed(self.random_seed)
```
Using random.seed() seeds the global random number generator, which can lead to non-reproducible benchmarks if other parts of the code also use the global random module. To ensure reproducibility and avoid side effects, it's better to use a dedicated random.Random instance for this class. You will also need to update generate_prompt to use this instance.
```diff
- random.seed(self.random_seed)
+ self.rng = random.Random(self.random_seed)
```
```python
num_available_tokens = len(token_ids)

# Randomly select a start position
start_pos = random.randint(0, num_available_tokens - 1)
```
To complete the change to using a dedicated random number generator instance, this call should use self.rng.randint() instead of the global random.randint() to ensure benchmark reproducibility.
```diff
- start_pos = random.randint(0, num_available_tokens - 1)
+ start_pos = self.rng.randint(0, num_available_tokens - 1)
```
```python
    **kwargs,
) -> list[SampleRequest]:
    # Tokenize the entire text content
    token_ids = self.get_token_ids(tokenizer)
```
The code tokenizes the text but doesn't handle the case where the tokenization results in an empty list of tokens (e.g., if the text file contains only whitespace). This will lead to a ValueError in generate_prompt from random.randint(0, -1) or a ZeroDivisionError from the modulo operation if len(token_ids) is 0. An explicit check should be added after tokenization to prevent this crash.
```diff
  token_ids = self.get_token_ids(tokenizer)
+ if not token_ids:
+     raise ValueError("Tokenized text is empty and cannot be sampled from.")
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
input_len=args.txt_slices_input_len,
output_len=args.txt_slices_output_len,
```
Define txt-slices CLI args used in get_samples
The txt-slices branch of get_samples reads args.txt_slices_input_len and args.txt_slices_output_len, but add_dataset_parser only defines the random input/output options and never creates these txt-slices attributes. Passing --dataset-name txt-slices will therefore raise an AttributeError before sampling. Please add parser definitions for these arguments or reuse the existing random lengths when wiring the dataset.
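A minimal sketch of what the missing parser definitions could look like, assuming the existing `parser` object inside `add_dataset_parser`; the defaults and help strings below are illustrative assumptions, not values taken from this PR:

```python
# Hypothetical sketch: wiring the txt-slices options into add_dataset_parser.
# The option names mirror the attributes read in get_samples
# (args.txt_slices_input_len / args.txt_slices_output_len);
# defaults and help text are assumptions for illustration only.
txt_slices_group = parser.add_argument_group("txt-slices dataset options")
txt_slices_group.add_argument(
    "--txt-slices-input-len",
    type=int,
    default=1024,  # assumed default
    help="Number of input tokens per request sampled from the txt file.",
)
txt_slices_group.add_argument(
    "--txt-slices-output-len",
    type=int,
    default=128,  # assumed default
    help="Number of output tokens to request per sampled slice.",
)
```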
Purpose
Sampling tokens at random directly from a tokenizer's vocabulary produces data that is not representative for benchmarking speculative decoding or expert parallelism.
On the other hand, random datasets are very flexible and offer complete control over the input and output sequence lengths, which is desirable for creating reproducible benchmarks.
This PR introduces a new type of benchmarking dataset called `TxtSlicesDataset`, which offers a compromise between the flexibility of a random dataset and the fidelity of a real dataset. It allows sampling slices from a user-provided txt file.
Content
- Added `TxtSlicesDataset` to `datasets.py`
- Moved utility functions out of `datasets.py` into a new `dataset_utils.py` in an attempt to bring the file to a more manageable size
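For readers new to the approach, here is a minimal standalone sketch of the slice-sampling idea. It is not the actual `TxtSlicesDataset` implementation: the function name, signature, and wrap-around slicing are assumptions pieced together from the review snippets above (the dedicated RNG and the empty-token check follow the reviewers' suggestions).

```python
import random
from transformers import PreTrainedTokenizerBase


def sample_slice(tokenizer: PreTrainedTokenizerBase, text: str,
                 input_len: int, seed: int = 0) -> list[int]:
    """Return input_len token ids sliced from a random position in the text.

    Illustration of the TxtSlicesDataset idea: tokenize the whole file once,
    then cut fixed-length windows starting at random offsets, wrapping around
    the end of the document so short files can still be sampled.
    """
    token_ids = tokenizer.encode(text)
    if not token_ids:
        raise ValueError("Tokenized text is empty and cannot be sampled from.")
    rng = random.Random(seed)  # dedicated RNG, as suggested in the review
    start_pos = rng.randint(0, len(token_ids) - 1)
    # Modulo wrap-around keeps every start position valid regardless of length.
    return [token_ids[(start_pos + i) % len(token_ids)] for i in range(input_len)]
```

Because the slices come from real text rather than random vocabulary ids, the resulting prompts keep natural token statistics while still giving full control over input and output lengths.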