ReadTheDocs Tutorial additions (redshift and editor interface) #1740
Merged
5 commits:
- 7aabd07: Created DeepForge editor interface tutorial (KyleAMoore)
- 5d48ad9: Added redshift tutorial and example docs (KyleAMoore)
- 32c07e1: Made changes requested for PR #1740 (KyleAMoore)
- a851670: Updated github repo url (KyleAMoore)
- 0885cbc: Fixed numerous typos (KyleAMoore)
Redshift Estimation
===================

The project described on this page can be found in the `examples repo <https://github.com/deepforge-dev/examples/tree/master/redshift-tutorial>`_ on GitHub under the name **Redshift-Application.webgmex**.

This project provides a small collection of generalized pipelines for training and using redshift estimation models. It is designed to be simple to use, requiring only that the configuration parameters of individual nodes be defined where necessary. The most involved change most users should need to make is defining additional architectures in the **Resources** tab. Note that any newly defined architecture should have an output length and input shape that match the *num_bins* and *input_shape* configuration parameters used in the various pipelines.

Pipeline Overview
-----------------

* `Train Test Single`_
* `Train Test Compare`_
* `Download Train Evaluate`_
* `Train Predict`_
* `Predict Pretrained`_
* `Test Pretrained`_
* `Download SDSS`_
* `Download Train Predict`_

.. * `Visualize Predictions`_
.. * `Train Visualize`_

.. figure:: application-pipelines.png
    :align: center
    :width: 75%

Pipelines
---------

Train Test Single
~~~~~~~~~~~~~~~~~
Trains and evaluates a single CNN model, using predefined artifacts that contain the training and testing data. For this and all training pipelines, each artifact should contain a single numpy array. Input arrays should be 4D arrays of shape **(n, y, x, c)**, where n is the number of images, y the image height, x the image width, and c the number of color channels. Output (label) arrays should be of shape **(n,)**.
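
As an illustration of the expected shapes, here is a minimal sketch; the array sizes are arbitrary placeholders rather than values from the tutorial project.

.. code-block:: python

    import numpy as np

    n, y, x, c = 1000, 64, 64, 5              # hypothetical image count, height, width, channels
    train_imgs = np.random.rand(n, y, x, c)   # input artifact: a single 4D array of shape (n, y, x, c)
    train_labels = np.random.rand(n)          # label artifact: a single 1D array of shape (n,)

    assert train_imgs.shape == (n, y, x, c)
    assert train_labels.shape == (n,)
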

.. Visualize Predictions
.. ~~~~~~~~~~~~~~~~~~~~~

Train Test Compare
~~~~~~~~~~~~~~~~~~
Trains and evaluates two CNN models and compares the effectiveness of the two models.

Download Train Evaluate
~~~~~~~~~~~~~~~~~~~~~~~
Downloads SDSS images, trains a model on the images, and evaluates the model on a separate set of downloaded images. Take care when defining your own CasJobs query to ensure that every galaxy queried for training has a redshift value below the value of the **Train** node's *max_val* configuration parameter.
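
A quick sanity check of downloaded labels against *max_val* might look like the following sketch; the variable names and values are illustrative rather than taken from the pipeline.

.. code-block:: python

    import numpy as np

    max_val = 0.4                                   # assumed value of the Train node's max_val parameter
    redshifts = np.array([0.05, 0.12, 0.38, 0.55])  # hypothetical redshifts returned by a CasJobs query

    # Galaxies at or above max_val cannot be binned correctly and should be
    # excluded from training (or the query tightened so they are never returned).
    keep = redshifts < max_val
    print((~keep).sum(), "of", len(redshifts), "galaxies exceed max_val and would be dropped")
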
Train Predict
~~~~~~~~~~~~~
Trains a single CNN model and uses the newly trained model to predict the redshift value of another set of galaxies.

Predict Pretrained
~~~~~~~~~~~~~~~~~~
Predicts the redshift value of a set of galaxies using a pre-existing model that is saved as an artifact.

Test Pretrained
~~~~~~~~~~~~~~~
Evaluates the performance of a pre-existing model that is saved as an artifact.

.. Train Visualize
.. ~~~~~~~~~~~~~~~

Download SDSS
~~~~~~~~~~~~~
Downloads SDSS images and saves them as artifacts. This pipeline can be used in conjunction with the other pipelines that rely on artifacts rather than images retrieved at execution time.

Download Train Predict
~~~~~~~~~~~~~~~~~~~~~~
Downloads SDSS images and uses some of the images to train a model before using the model to predict the redshift value of the remaining galaxies.

Tutorial Project - Redshift
===========================
The project described on this page can be found in the `examples repo <https://github.com/deepforge-dev/examples/tree/master/redshift-tutorial>`_ on GitHub under the name **Redshift-Tutorial.webgmex**.

Pipeline Overview
-----------------
1. `Basic Input/Output`_
2. `Display Random Image`_
3. `Display Random CIFAR-10`_
4. `Train CIFAR-10`_
5. `Train-Test`_
6. `Train-Test-Compare`_
7. `Download-Train-Evaluate`_

.. 6. `Visualize Predictions`_

Pipelines
---------

Basic Input/Output
~~~~~~~~~~~~~~~~~~
This pipeline is one of the simplest possible in DeepForge. Its sole purpose is to create an array of numbers, pass the array from the first node to the second node, and print the array to the output console.

The **Output** operation shown is a special built-in operation that saves the data provided to it to the selected storage backend. This data is then available within the same project as an artifact and can be accessed by other pipelines using the special built-in **Input** operation.

.. figure:: basic-io.png
    :align: center

.. code-block:: python

    import numpy

    class GenArray():
        def __init__(self, length=10):
            self.length = length
            return

        def execute(self):
            arr = list(numpy.random.rand(self.length))
            return arr

Display Random Image
~~~~~~~~~~~~~~~~~~~~
.. figure:: display-rand-img.png
    :align: center

This pipeline's primary purpose is to show how graphics can be output and viewed. A random noise image is generated and displayed using matplotlib's pyplot library. Any graphic displayed using the **plt.show()** function can be viewed in the executions tab.

.. code-block:: python

    from matplotlib import pyplot as plt
    from random import randint

    class DisplayImage():
        def execute(self, image):
            if len(image.shape) == 4:
                image = image[randint(0, image.shape[0] - 1)]
            plt.imshow(image)
            plt.show()

Display Random CIFAR-10
~~~~~~~~~~~~~~~~~~~~~~~
.. figure:: display-cifar.png
    :align: center

As with the previous pipeline, this pipeline simply displays a single image. The image here is more meaningful, however, as it is drawn from the commonly used `CIFAR-10 dataset <http://www.cs.toronto.edu/~kriz/cifar.html>`_. This pipeline shows an example of the input used in the next pipeline and of how that data can be obtained. This is important for users who want to develop their own pipelines, as CIFAR-10 generally serves as an effective baseline for testing and developing new CNN architectures or training processes.

Also note, as shown in the figure above, that it is not necessary to use all of the outputs of a given node. Unless specifically handled, however, it is generally inappropriate to leave an input undefined.

.. code-block:: python

    from keras.datasets import cifar10

    class GetDataCifar():
        def execute(self):
            ((train_imgs, train_labels),
             (test_imgs, test_labels)) = cifar10.load_data()
            return train_imgs, train_labels, test_imgs, test_labels

Train CIFAR-10
~~~~~~~~~~~~~~
.. figure:: train-basic.png
    :align: center

This pipeline gives a very basic example of how to create, train, and evaluate a simple CNN. The primary takeaway should be the overall structure of a training pipeline, which in most cases follows these steps:

1. Load data
2. Define the loss, optimizer, and other metrics
3. Compile the model, with loss, metrics, and optimizer, using the **compile()** method
4. Train the model using the **fit()** method, which requires the training inputs and outputs
5. Output the trained model for serialization and/or use in subsequent nodes

.. code-block:: python

    import numpy as np
    import keras

    class TrainBasic():
        def __init__(self, model, epochs=20, batch_size=32, shuffle=True):
            self.model = model
            self.epochs = epochs
            self.batch_size = batch_size
            self.shuffle = shuffle
            return

        def execute(self, train_imgs, train_labels):
            opt = keras.optimizers.rmsprop(lr=0.001)
            self.model.compile(loss='sparse_categorical_crossentropy',
                               optimizer=opt,
                               metrics=['sparse_categorical_accuracy'])
            self.model.fit(train_imgs,
                           train_labels,
                           batch_size=self.batch_size,
                           epochs=self.epochs,
                           shuffle=self.shuffle,
                           verbose=2)
            model = self.model
            return model

.. code-block:: python

    class EvalBasic():
        def __init__(self):
            return

        def execute(self, model, test_imgs, test_labels):
            results = model.evaluate(test_imgs, test_labels, verbose=0)
            for i, metric in enumerate(model.metrics_names):
                print(metric, '-', results[i])
            return results

Train-Test
~~~~~~~~~~
.. figure:: train-basic.png
    :align: center

This pipeline provides an example of how one might train and evaluate a redshift estimation model. In particular, the procedure implemented here is a simplified version of the work by `Pasquet et al. (2018) <https://www.aanda.org/articles/aa/abs/2019/01/aa33617-18/aa33617-18.html>`_. For readers unfamiliar with cosmological redshift, `this article <https://earthsky.org/astronomy-essentials/what-is-a-redshift>`_ provides a brief introduction to the topic. For the training process, there are two primary additions to note.

First, the **Train** class has been given a function named **to_categorical**. In line with the Pasquet et al. method linked above, this tutorial uses a classification model rather than a regression model for estimation. Because we are using a classification model, Keras expects the output labels to be either one-hot vectors or single integers, where the position or value indicates the range in which the true redshift value falls. This function converts the continuous redshift values into the necessary discrete, categorical format.
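
A minimal sketch of that kind of conversion, assuming evenly spaced bins between 0 and *max_val*, is shown below; the project's actual **to_categorical** implementation and bin settings may differ.

.. code-block:: python

    import numpy as np

    def to_categorical(redshifts, num_bins=180, max_val=0.4):
        """Map continuous redshift values to integer bin indices in [0, num_bins)."""
        redshifts = np.asarray(redshifts)
        bin_width = max_val / num_bins
        bins = np.floor(redshifts / bin_width).astype(int)
        return np.clip(bins, 0, num_bins - 1)

    print(to_categorical([0.01, 0.13, 0.39]))  # [  4  58 175] with the assumed bin settings
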
Second, a class has been provided to give examples of how researchers may define their own `keras Sequence <https://keras.io/api/utils/python_utils/#sequence-class>`_ for training. Sequences are helpful in that they allow alterations to be made to the data during training. In the example given here, the **SdssSequence** class provides the ability to rotate or flip images before every epoch, which will hopefully improve the robustness of the final model.
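
A minimal sketch of such a Sequence follows; the project's **SdssSequence** class is more involved, and the class and parameter names here are illustrative only.

.. code-block:: python

    import numpy as np
    from keras.utils import Sequence

    class AugmentedSequence(Sequence):
        def __init__(self, images, labels, batch_size=32):
            self.images = images        # assumed square images of shape (n, y, x, c)
            self.labels = labels
            self.batch_size = batch_size

        def __len__(self):
            return int(np.ceil(len(self.images) / self.batch_size))

        def __getitem__(self, idx):
            start = idx * self.batch_size
            batch = self.images[start:start + self.batch_size].copy()
            labels = self.labels[start:start + self.batch_size]
            # Randomly flip and rotate each image by a multiple of 90 degrees,
            # so the model sees a slightly different version every epoch.
            for i in range(len(batch)):
                if np.random.rand() < 0.5:
                    batch[i] = np.fliplr(batch[i])
                batch[i] = np.rot90(batch[i], k=np.random.randint(4))
            return batch, labels
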
The evaluation node has also been updated to provide metrics more in line with redshift estimation. Specifically, it calculates the fraction of outlier predictions, the model's prediction bias, the deviation in the MAD scores of the model output, and the average Continuous Ranked Probability Score (CRPS) of the output.
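
For readers curious how such metrics are typically computed, the following rough sketch uses common conventions from the photometric-redshift literature; it is not the exact code used by the evaluation node, and the outlier threshold and CDF discretization are assumptions.

.. code-block:: python

    import numpy as np

    def redshift_metrics(z_true, z_pred, pdfs, bin_centers):
        """z_true, z_pred: 1D arrays of redshifts; pdfs: (n, num_bins) predicted
        probabilities per galaxy; bin_centers: redshift at the center of each bin."""
        dz = (z_pred - z_true) / (1 + z_true)                        # normalized residuals
        bias = np.mean(dz)                                           # prediction bias
        sigma_mad = 1.4826 * np.median(np.abs(dz - np.median(dz)))   # MAD deviation
        outlier_frac = np.mean(np.abs(dz) > 0.05)                    # fraction of outliers

        # Average CRPS: squared difference between the predicted CDF and the
        # step-function CDF of the true redshift, summed over bins.
        cdf_pred = np.cumsum(pdfs, axis=1)
        cdf_true = (bin_centers[None, :] >= z_true[:, None]).astype(float)
        bin_width = bin_centers[1] - bin_centers[0]
        crps = np.mean(np.sum((cdf_pred - cdf_true) ** 2, axis=1) * bin_width)

        return outlier_frac, bias, sigma_mad, crps
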

.. Visualize Predictions
.. ~~~~~~~~~~~~~~~~~~~~~

Train-Test-Compare
~~~~~~~~~~~~~~~~~~
.. figure:: train-compare.png
    :align: center

This pipeline gives a more complicated example of how to create visualizations that may be helpful for understanding the effectiveness of a model. The **EvalCompare** node provides a simple comparison visualization of two models.

Download-Train-Evaluate
~~~~~~~~~~~~~~~~~~~~~~~
.. figure:: download.png
    :align: center

This pipeline provides an example of how data can be retrieved and used in the same pipeline. The previous pipelines use manually uploaded artifacts. In many real cases, users may want to retrieve new or more specific data using SciServer's CasJobs API. In such cases, the **DownloadSDSS** node makes downloading data relatively simple. Note that the downloaded data is not in a form easily usable by our models and first requires moderate preprocessing, which is performed in the **Preprocessing** node. This general download-process-train structure is a common pattern, as data is rarely supplied in a clean, immediately usable format.
Interface Overview
==================
The DeepForge editor interface is separated into six views for defining all of the necessary features of your project. Each view is described below. You can switch between views at any time by clicking the appropriate icon on the left side of the screen. In order, the tabs are:

+---------------+--------------------------+
| |tabs|        | - Pipelines_             |
|               | - Executions_            |
|               | - Resources_             |
|               | - Artifacts_             |
|               | - `Custom Utils`_        |
|               | - `Custom Serialization`_|
+---------------+--------------------------+

.. |tabs| image:: interface_tabs.png

Pipelines
---------
.. figure:: pipelines_tab.png
    :align: center
    :width: 75%

In the initial view, all pipelines that currently exist in the project are displayed. New pipelines can be created using the floating red button in the bottom right. From this screen, existing pipelines can also be opened for editing, deleted, or renamed.

Pipeline editing
~~~~~~~~~~~~~~~~
.. figure:: pipeline_example.png
    :align: center
    :width: 50%

DeepForge pipelines are directed acyclic graphs of operations, where each operation is an isolated python module. Operations are added to a pipeline using the red plus button in the bottom right of the workspace. Any operations that have previously been defined in the project can be added to the pipeline, or new operations can be created when needed. Arrows in the workspace indicate the passing of data between operations. These arrows can be created by clicking on the desired output (bottom circles) of the first operation and then clicking on the desired input (top circles) of the second operation. Clicking on an operation also gives the options to delete (red X), edit (blue </>), or change attributes. Information on editing operations can be found in `Custom Operations <custom_operations.rst>`_.

Pipelines are executed by clicking the yellow play button in the bottom right of the workspace. In the window that appears, you can name the execution, select a computation platform, and select a storage platform. Computation platforms specify the compute resources that will be used to execute the operations, such as `SciServer Compute <https://apps.sciserver.org/compute/>`_. Supported storage platforms, such as endpoints with an S3-compatible API, are used to store intermediate and output data. The selected storage option will be used to store both the output objects defined in the pipeline and all files used in the execution of the pipeline.

.. figure:: execute_pipeline.png
    :align: center
    :width: 75%

Executions
----------
.. figure:: executions_tab.png
    :align: center
    :width: 75%

This view allows the review of previous pipeline executions. Clicking on any execution will display any plotted data generated by the pipeline, and selecting multiple executions will display all of the selected plots together. Clicking the provided links will open either the associated pipeline or a trace of the execution (shown below). The blue icon in the top right of every operation allows viewing the text output of that operation. The execution trace can be viewed during execution to check the status of a running job. During execution, the color of an operation indicates its current status. The possible statuses are:

- **Dark gray**: Pending Execution
- **Light gray**: Execution Queued
- **Yellow**: Execution in Progress
- **Orange**: Execution Cancelled
- **Green**: Successfully Finished Execution
- **Red**: Execution Failed

.. figure:: execution_finished.png
    :align: center
    :width: 50%

Resources
---------
.. figure:: resources_tab.png
    :align: center
    :width: 75%

This view shows the resources available for use in pipelines. Different types of resources are made available through DeepForge extensions and enable the introduction of new concepts into the project. One such example is `deepforge-keras <https://github.com/deepforge-dev/deepforge-keras>`_, which enables users to build neural network architectures with a custom visual editor. The created architectures can then be referenced and used by operations for tasks such as training. From this view, resources can be created, deleted, and renamed.

.. figure:: neural_network.png
    :align: center
    :width: 50%

As with pipelines, neural networks are depicted as directed graphs. Each node in the graph corresponds to a single layer or operation in the network (information on operations can be found on the `keras website <https://keras.io/api/>`_). Clicking on a layer provides the ability to change the attributes of that layer, delete the layer, or add new layers before or after the current layer. Many operations require that certain attributes be defined before use. The Conv2D operation pictured above, for example, requires that the *filters* and *kernel_size* attributes be defined. If these are left as *<none>*, a visual indicator will flag the error to help prevent mistakes. To ease analysis and development, hovering over any connecting line will display the shape of the data as it moves between the given layers.
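
The visual editor is building an ordinary Keras model behind the scenes, so the Conv2D example above corresponds roughly to the following sketch; the input and layer sizes here are arbitrary assumptions rather than values from the figure.

.. code-block:: python

    from keras.layers import Input, Conv2D
    from keras.models import Model

    # filters and kernel_size must be defined, just as in the visual editor.
    inputs = Input(shape=(64, 64, 5))
    conv = Conv2D(filters=32, kernel_size=(3, 3), activation='relu')(inputs)
    model = Model(inputs, conv)

    # The shape shown when hovering over a connection corresponds to the
    # output shape of the upstream layer.
    print(model.output_shape)  # (None, 62, 62, 32) with the default 'valid' padding
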
Artifacts
---------
.. figure:: artifacts_tab.png
    :align: center
    :width: 75%

In this view, you can see all artifacts that are available to your pipelines. These artifacts can be used in any pipeline through the inclusion of the built-in **Input** operation. Artifacts are pieces of saved data that may be associated with some Python data type. Any arbitrary type of data may be used for creating an artifact, but if a data type is not specified, or if a data type is not provided with a `custom serialization <Custom Serialization_>`_, the artifact will be treated as a `pickle object <https://docs.python.org/3/library/pickle.html>`_. If you have data that cannot be opened with Python's pickle module, you will need to create a custom serialization as described below. Some DeepForge extensions may also support additional data types by default. DeepForge-Keras, for example, supports saved keras models, in addition to the standard pickle objects, without the need for custom serialization.

A new artifact can be created in one of three ways. First, artifacts are automatically created during the execution of any pipeline that includes the built-in **Output** operation. Second, artifacts can be directly uploaded in this view using the red upload button in the bottom right of the workspace. Using this option will also upload the artifact to the storage platform specified in the popup window. Finally, artifacts that already exist in one of the storage platforms can be imported using the blue import button in the bottom right of the workspace.

|import| |upload|

.. |import| image:: import_artifact.png
    :width: 45%
.. |upload| image:: upload_artifact.png
    :width: 45%

Custom Utils
------------
.. figure:: custom_utils.png
    :align: center
    :width: 75%

This view allows the creation and editing of custom utility modules. Utilities created here can be imported into any pipeline operation. For example, the *swarp_config_string* shown above can be printed out in an operation using the following code:

.. code-block:: python

    import utils.swarp_string as ss
    print(ss.swarp_config_string)
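
The utility module itself is ordinary Python, so anything defined at its top level can be imported by operations. The following is a purely hypothetical sketch of what the *swarp_string* module above might contain; the real contents shown in the figure will differ.

.. code-block:: python

    # Hypothetical contents of the swarp_string utility module.
    # Any module-level name defined here is importable from operations
    # as utils.swarp_string.<name>.
    swarp_config_string = """
    # ...full SWarp configuration text would go here...
    """
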
Custom Serialization
--------------------
.. figure:: custom_serializer.png
    :align: center
    :width: 75%

In this view, you can create custom serialization protocols for creating and using artifacts that are neither Python pickle objects nor keras models. To create a serialization, you will need to define two functions, one for serialization and one for deserialization. These functions must then be passed as arguments to the *deepforge.serialization.register* function, as shown in the commented code above. The serializer and deserializer should have the same signatures as the dump and load functions, respectively, from Python's `pickle module <https://docs.python.org/3/library/pickle.html>`_.
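
As a rough sketch of what such a registration might look like (the exact argument order of *deepforge.serialization.register* is an assumption here, and the JSON format is just an example):

.. code-block:: python

    import json
    import deepforge

    def json_dump(obj, outfile):
        # Same signature as pickle.dump: (object, writable binary file-like object).
        outfile.write(json.dumps(obj).encode('utf-8'))

    def json_load(infile):
        # Same signature as pickle.load: (readable binary file-like object) -> object.
        return json.loads(infile.read().decode('utf-8'))

    # Assumed call: a format name followed by the serializer and deserializer.
    deepforge.serialization.register('json', json_dump, json_load)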