Commit

Update stable dir
actions-user committed May 7, 2024
1 parent a48cde5 commit 8ca87b6
Showing 225 changed files with 24,209 additions and 13,462 deletions.
11 files renamed without changes
52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/benchmarking/noise_generation.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/classification.html

Large diffs are not rendered by default.

99 changes: 50 additions & 49 deletions stable/_modules/cleanlab/count.html

Large diffs are not rendered by default.

785 changes: 785 additions & 0 deletions stable/_modules/cleanlab/data_valuation.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/datalab.html

Large diffs are not rendered by default.

54 changes: 29 additions & 25 deletions stable/_modules/cleanlab/datalab/internal/data.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/internal/data_issues.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/internal/issue_finder.html

Large diffs are not rendered by default.

85 changes: 52 additions & 33 deletions stable/_modules/cleanlab/datalab/internal/issue_manager/label.html

Large diffs are not rendered by default.

63 changes: 36 additions & 27 deletions stable/_modules/cleanlab/datalab/internal/issue_manager/noniid.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/internal/issue_manager/null.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/internal/model_outputs.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/internal/report.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/datalab/internal/task.html

Large diffs are not rendered by default.

62 changes: 36 additions & 26 deletions stable/_modules/cleanlab/dataset.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/experimental/cifar_cnn.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/experimental/coteaching.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/experimental/label_issues_batched.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/experimental/mnist_pytorch.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/experimental/span_classification.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/filter.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/internal/label_quality_utils.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/internal/latent_algebra.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/internal/multiannotator_utils.html

Large diffs are not rendered by default.

78 changes: 37 additions & 41 deletions stable/_modules/cleanlab/internal/multilabel_scorer.html

Large diffs are not rendered by default.

56 changes: 30 additions & 26 deletions stable/_modules/cleanlab/internal/multilabel_utils.html

Large diffs are not rendered by default.

115 changes: 86 additions & 29 deletions stable/_modules/cleanlab/internal/outlier.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/internal/token_classification_utils.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/internal/util.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/internal/validation.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/models/keras.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/multiannotator.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/multilabel_classification/dataset.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/multilabel_classification/filter.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/multilabel_classification/rank.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/object_detection/filter.html

Large diffs are not rendered by default.

67 changes: 36 additions & 31 deletions stable/_modules/cleanlab/object_detection/rank.html

Large diffs are not rendered by default.

72 changes: 43 additions & 29 deletions stable/_modules/cleanlab/object_detection/summary.html

Large diffs are not rendered by default.

96 changes: 63 additions & 33 deletions stable/_modules/cleanlab/outlier.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/rank.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/regression/learn.html

Large diffs are not rendered by default.

65 changes: 38 additions & 27 deletions stable/_modules/cleanlab/regression/rank.html

Large diffs are not rendered by default.

125 changes: 73 additions & 52 deletions stable/_modules/cleanlab/segmentation/filter.html

Large diffs are not rendered by default.

96 changes: 51 additions & 45 deletions stable/_modules/cleanlab/segmentation/rank.html

Large diffs are not rendered by default.

61 changes: 34 additions & 27 deletions stable/_modules/cleanlab/segmentation/summary.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/token_classification/filter.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/token_classification/rank.html

Large diffs are not rendered by default.

52 changes: 28 additions & 24 deletions stable/_modules/cleanlab/token_classification/summary.html

Large diffs are not rendered by default.

53 changes: 29 additions & 24 deletions stable/_modules/index.html

Large diffs are not rendered by default.

8 changes: 8 additions & 0 deletions stable/_sources/cleanlab/data_valuation.rst
@@ -0,0 +1,8 @@
data_valuation
==============

.. automodule:: cleanlab.data_valuation
   :autosummary:
   :members:
   :undoc-members:
   :show-inheritance:
@@ -45,7 +45,7 @@ Examples whose given label is estimated to be potentially incorrect (e.g. due to
Datalab estimates which examples appear mislabeled as well as a numeric label quality score for each, which quantifies the likelihood that an example is correctly labeled.

For now, Datalab can only detect label issues in multi-class classification datasets, regression datasets, and multi-label classification datasets.
- The cleanlab library has alternative methods you can us to detect label issues in other types of datasets (multi-annotator, token classification, etc.).
+ The cleanlab library has alternative methods you can use to detect label issues in other types of datasets (multi-annotator, token classification, etc.).

Label issues are calculated based on provided `pred_probs` from a trained model. If you do not provide this argument, but you do provide `features`, then a K Nearest Neighbor model will be fit to produce `pred_probs` based on your `features`. Otherwise if neither `pred_probs` nor `features` is provided, then this type of issue will not be considered.
For the most accurate results, provide out-of-sample `pred_probs` which can be obtained for a dataset via `cross-validation <https://docs.cleanlab.ai/stable/tutorials/pred_probs_cross_val.html>`_.
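The cross-validation recipe can be sketched in a few lines. This is a hypothetical illustration, not cleanlab code: the `out_of_sample_pred_probs` helper and its pure-numpy nearest-centroid classifier stand in for whatever sklearn-compatible model you actually train.

```python
import numpy as np

def out_of_sample_pred_probs(X, y, n_folds=5, seed=0):
    """Hypothetical helper: K-fold cross-validation with a simple
    nearest-centroid classifier, so every row's predicted probabilities
    come from a model that never saw that row during training."""
    rng = np.random.default_rng(seed)
    n = len(y)
    classes = np.unique(y)
    order = rng.permutation(n)
    pred_probs = np.zeros((n, len(classes)))
    for fold in range(n_folds):
        test_idx = order[fold::n_folds]          # held-out rows for this fold
        train_mask = np.ones(n, dtype=bool)
        train_mask[test_idx] = False
        centroids = np.stack(
            [X[train_mask & (y == c)].mean(axis=0) for c in classes]
        )
        dists = np.linalg.norm(X[test_idx, None, :] - centroids[None, :, :], axis=2)
        logits = -dists                           # closer centroid -> higher score
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        pred_probs[test_idx] = e / e.sum(axis=1, keepdims=True)
    return pred_probs
```

Each row of the returned `pred_probs` comes from a model that never saw that row, which is exactly the out-of-sample property described above.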
@@ -118,10 +118,12 @@ More generally, examples which happen to be duplicated can affect the final mod
Non-IID Issue
-------------

- Whether the dataset exhibits statistically significant violations of the IID assumption like: changepoints or shift, drift, autocorrelation, etc. The specific form of violation considered is whether the examples are ordered such that almost adjacent examples tend to have more similar feature values. If you care about this check, do **not** first shuffle your dataset -- this check is entirely based on the sequential order of your data.
+ Whether the overall dataset exhibits statistically significant violations of the IID assumption like: changepoints or shift, drift, autocorrelation, etc. The specific form of violation considered is whether the examples are ordered within the dataset such that almost adjacent examples tend to have more similar feature values. If you care about this check, do **not** first shuffle your dataset -- this check is entirely based on the sequential order of your data. Learn more via our blog: `https://cleanlab.ai/blog/non-iid-detection/ <https://cleanlab.ai/blog/non-iid-detection/>`_

The Non-IID issue is detected based on provided `features` or `knn_graph`. If you do not provide one of these arguments, this type of issue will not be considered.

The Non-IID issue is really a dataset-level check, not a per-datapoint level check (either a dataset violates the IID assumption or it doesn't). The per-datapoint scores returned for Non-IID issues merely highlight which datapoints you might focus on to better understand this dataset-level issue - there is not necessarily something specifically wrong with these specific datapoints.

Mathematically, the **overall** Non-IID score for the dataset is defined as the p-value of a statistical test for whether the distribution of *index-gap* values differs between group A vs. group B defined as follows. For a pair of examples in the dataset `x1, x2`, we define their *index-gap* as the distance between the indices of these examples in the ordering of the data (e.g. if `x1` is the 10th example and `x2` is the 100th example in the dataset, their index-gap is 90). We construct group A from pairs of examples which are amongst the K nearest neighbors of each other, where neighbors are defined based on the provided `knn_graph` or via distances in the space of the provided vector `features` . Group B is constructed from random pairs of examples in the dataset.

The Non-IID quality score for each example `x` is defined via a similarly computed p-value but with Group A constructed from the K nearest neighbors of `x` and Group B constructed from random examples from the dataset paired with `x`. Learn more about the math behind this method in our paper: `Detecting Dataset Drift and Non-IID Sampling via k-Nearest Neighbors <https://arxiv.org/abs/2305.15696>`_
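The two-group construction above can be sketched as follows; `non_iid_pvalue` is a hypothetical helper illustrating the idea with a simple permutation test on mean index-gaps, not cleanlab's actual statistic:

```python
import numpy as np

def non_iid_pvalue(features, k=5, n_random=2000, n_perm=500, seed=0):
    """Sketch of the dataset-level Non-IID check: compare index-gaps of
    KNN pairs (group A) vs random pairs (group B) via a permutation test
    on the difference of mean gaps."""
    rng = np.random.default_rng(seed)
    n = len(features)
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                      # exclude self-pairs
    nn = np.argsort(d, axis=1)[:, :k]                # k nearest neighbors per row
    gaps_a = np.abs(np.arange(n)[:, None] - nn).ravel()  # KNN-pair index gaps
    i, j = rng.integers(0, n, (2, n_random))
    gaps_b = np.abs(i - j)                           # random-pair index gaps
    obs = gaps_a.mean() - gaps_b.mean()
    pooled = np.concatenate([gaps_a, gaps_b])
    count = 0
    for _ in range(n_perm):                          # permutation null
        rng.shuffle(pooled)
        stat = pooled[:len(gaps_a)].mean() - pooled[len(gaps_a):].mean()
        count += abs(stat) >= abs(obs)
    return (count + 1) / (n_perm + 1)
```

On data ordered by feature value, KNN pairs have tiny index-gaps relative to random pairs, so the p-value is small; after shuffling, the two gap distributions match and the p-value grows.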
@@ -209,7 +211,7 @@ Data Valuation Issue

The examples in the dataset with the lowest data valuation scores contribute the least to a trained ML model's performance (those whose value falls below a threshold are flagged with this type of issue).

- Data valuation issues can only be detected based on a provided `knn_graph` (or one pre-computed during the computation of other issue types). If you do not provide this argument and there isn't a `knn_graph` already stored in the Datalab object, this type of issue will not be considered.
+ Data valuation issues can be detected based on provided `features` or a provided `knn_graph` (or one pre-computed during the computation of other issue types). If you do not provide one of these two arguments and there isn't a `knn_graph` already stored in the Datalab object, this type of issue will not be considered.

The data valuation score is an approximate Data Shapley value, calculated based on the labels of the top k nearest neighbors of an example. The details of this KNN-Shapley value can be found in the papers: `Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms <https://arxiv.org/abs/1908.08619>`_ and `Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? <https://arxiv.org/abs/1911.07128>`_.
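For intuition, here is a sketch of the exact KNN-Shapley recursion from the first paper cited above, computed for a single test point; `knn_shapley` is a hypothetical helper, and cleanlab's actual data valuation (which works from a KNN graph over the dataset itself) differs in details:

```python
import numpy as np

def knn_shapley(X_train, y_train, x_test, y_test, K=3):
    """Exact KNN-Shapley values of the training points for one test point,
    via the backward recursion of Jia et al. (2019): sort training points
    by distance to the test point, then fill values from farthest to nearest."""
    n = len(y_train)
    order = np.argsort(np.linalg.norm(X_train - x_test, axis=1))
    match = (y_train[order] == y_test).astype(float)  # label agreement, sorted
    s = np.zeros(n)
    s[order[-1]] = match[-1] / n                      # farthest point
    for rank in range(n - 2, -1, -1):                 # i = rank + 1 (1-indexed)
        i = rank + 1
        s[order[rank]] = s[order[rank + 1]] + (
            (match[rank] - match[rank + 1]) / K * min(K, i) / i
        )
    return s
```

By the efficiency property of Shapley values, the entries sum to the K-NN utility on the full training set (the fraction of the K nearest neighbors whose label matches `y_test`), and the recursion computes all n values in O(n log n) per test point.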

Expand Down Expand Up @@ -294,7 +296,7 @@ Duplicate Issue Parameters
.. code-block:: python

   near_duplicate_kwargs = {
-      "metric": # string representing the distance metric used in nearest neighbors search (passed as argument to `NearestNeighbors`), if necessary,
+      "metric": # string or callable representing the distance metric used in nearest neighbors search (passed as argument to `NearestNeighbors`), if necessary,
       "k": # integer representing the number of nearest neighbors for nearest neighbors search (passed as argument to `NearestNeighbors`), if necessary,
       "threshold": # `threshold` argument to constructor of `NearDuplicateIssueManager()`. Non-negative floating value that determines the maximum distance between two examples to be considered outliers, relative to the median distance to the nearest neighbors,
   }
@@ -314,7 +316,7 @@ Non-IID Issue Parameters
.. code-block:: python

   non_iid_kwargs = {
-      "metric": # `metric` argument to constructor of `NonIIDIssueManager`. String for the distance metric used for nearest neighbors search if necessary. `metric` argument to constructor of `sklearn.neighbors.NearestNeighbors`,
+      "metric": # `metric` argument to constructor of `NonIIDIssueManager`. String or callable for the distance metric used for nearest neighbors search if necessary. `metric` argument to constructor of `sklearn.neighbors.NearestNeighbors`,
       "k": # `k` argument to constructor of `NonIIDIssueManager`. Integer representing the number of nearest neighbors for nearest neighbors search if necessary. `n_neighbors` argument to constructor of `sklearn.neighbors.NearestNeighbors`,
       "num_permutations": # `num_permutations` argument to constructor of `NonIIDIssueManager`,
       "seed": # seed for numpy's random number generator (used for permutation tests),
@@ -347,7 +349,7 @@ Underperforming Group Issue Parameters
   underperforming_group_kwargs = {
       # Constructor arguments for `UnderperformingGroupIssueManager`
       "threshold": # Non-negative floating value between 0 and 1 used for determining group of points with low confidence.
-      "metric": # String for the distance metric used for nearest neighbors search if necessary. `metric` argument to constructor of `sklearn.neighbors.NearestNeighbors`.
+      "metric": # String or callable for the distance metric used for nearest neighbors search if necessary. `metric` argument to constructor of `sklearn.neighbors.NearestNeighbors`.
       "k": # Integer representing the number of nearest neighbors for constructing the nearest neighbour graph. `n_neighbors` argument to constructor of `sklearn.neighbors.NearestNeighbors`.
       "min_cluster_samples": # Non-negative integer value specifying the minimum number of examples required for a cluster to be considered as the underperforming group. Used in `UnderperformingGroupIssueManager.filter_cluster_ids`.
       "clustering_kwargs": # Key-value pairs representing arguments for the constructor of the clustering algorithm class (e.g. `sklearn.cluster.DBSCAN`).
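Assuming cleanlab's `Datalab` API, parameter dictionaries like the ones above are supplied through the `issue_types` argument of `find_issues`; every value below is illustrative, not a documented default:

```python
# Illustrative parameter values only -- consult the tables above for the
# meaning of each key; `df`, `X`, and `pred_probs` are assumed to exist.
issue_types = {
    "near_duplicate": {"metric": "euclidean", "k": 10, "threshold": 0.13},
    "non_iid": {"metric": "euclidean", "k": 10, "num_permutations": 25, "seed": 0},
    "underperforming_group": {"threshold": 0.1, "k": 10, "min_cluster_samples": 5},
}
# lab = Datalab(data=df, label_name="label")
# lab.find_issues(features=X, pred_probs=pred_probs, issue_types=issue_types)
```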
10 changes: 4 additions & 6 deletions stable/_sources/index.rst
@@ -152,13 +152,10 @@ Link to Cleanlab Studio docs: `help.cleanlab.ai <https://help.cleanlab.ai/>`_
:caption: Tutorials

Datalab Tutorials <tutorials/datalab/index>
CleanLearning Tutorials <tutorials/clean_learning/index>
Workflows of Data-Centric AI <tutorials/indepth_overview>
Image Classification <tutorials/image>
Text Classification <tutorials/text>
Tabular Classification <tutorials/tabular>
Audio Classification <tutorials/audio>
- Find Dataset-level Issues <tutorials/dataset_health>
- Identifying Outliers <tutorials/outliers>
+ Analyze Dataset-level Issues <tutorials/dataset_health>
+ Outlier Detection <tutorials/outliers>
Improving Consensus Labels for Multiannotator Data <tutorials/multiannotator>
Multi-Label Classification <tutorials/multilabel_classification>
Noisy Labels in Regression <tutorials/regression>
@@ -181,6 +178,7 @@ Link to Cleanlab Studio docs: `help.cleanlab.ai <https://help.cleanlab.ai/>`_
cleanlab/dataset
cleanlab/outlier
cleanlab/multiannotator
+ cleanlab/data_valuation
cleanlab/multilabel_classification/index
cleanlab/regression/index
cleanlab/token_classification/index
8 changes: 8 additions & 0 deletions stable/_sources/tutorials/clean_learning/index.rst
@@ -0,0 +1,8 @@
CleanLearning Tutorials
=======================

.. toctree::
   :maxdepth: 1

   Text Classification <text>
   Tabular Classification (Numeric/Categorical) <tabular>
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Classification with Tabular Data using Scikit-Learn and Cleanlab\n"
+ "# Classification with Structured/Tabular Data and Noisy Labels\n"
]
},
{
@@ -15,15 +15,17 @@
"Consider Using Datalab\n",
"<br/>\n",
"\n",
- "If interested in detecting a wide variety of issues in your tabular data, check out the [Datalab tabular tutorial](https://docs.cleanlab.ai/stable/tutorials/datalab/tabular.html).\n",
+ "If interested in detecting a wide variety of issues in your tabular data, check out the [Datalab tabular tutorial](https://docs.cleanlab.ai/stable/tutorials/datalab/tabular.html). Datalab can detect many other types of data issues beyond label issues, whereas CleanLearning is a convenience method to handle noisy labels with sklearn-compatible classification models.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "In this 5-minute quickstart tutorial, we use cleanlab with scikit-learn models to find potential label errors in a classification dataset with tabular features (numeric/categorical columns). Tabular (or *structured*) data are typically organized in a row/column format and stored in a SQL database or file types like: CSV, Excel, or Parquet. Here we consider a Student Grades dataset, which contains over 900 individuals who have three exam grades and some optional notes, each being assigned a letter grade (their class label). cleanlab automatically identifies _hundreds_ of examples in this dataset that were mislabeled with the incorrect final grade (data entry mistakes). This tutorial shows how to handle noisy labels and produce more robust classification models for your own tabular datasets.\n",
+ "In this 5-minute quickstart tutorial, we use cleanlab with scikit-learn models to find potential label errors in a classification dataset with tabular features (numeric/categorical columns). Tabular (or *structured*) data are typically organized in a row/column format and stored in a SQL database or file types like: CSV, Excel, or Parquet. Here we consider a Student Grades dataset, which contains over 900 individuals who have three exam grades and some optional notes, each being assigned a letter grade (their class label). cleanlab automatically identifies _hundreds_ of examples in this dataset that were mislabeled with the incorrect final grade (data entry mistakes). \n",
"\n",
"This tutorial shows how to handle noisy labels and produce more robust classification models for your own tabular datasets. cleanlab's `CleanLearning` class automatically detects and filters out such badly labeled data, in order to train a more robust version of any Machine Learning model. No change to your existing modeling code is required! \n",
"\n",
"\n",
"**Overview of what we'll do in this tutorial:**\n",
@@ -99,7 +101,6 @@
"You can use `pip` to install all packages required for this tutorial as follows:\n",
"\n",
"```ipython3\n",
- "!pip install sklearn\n",
"!pip install cleanlab\n",
"# Make sure to install the version corresponding to this tutorial\n",
"# E.g. if viewing master branch documentation:\n",
Expand All @@ -119,7 +120,7 @@
"dependencies = [\"cleanlab\"]\n",
"\n",
"if \"google.colab\" in str(get_ipython()): # Check if it's running in Google Colab\n",
- " %pip install cleanlab==v2.6.3\n",
+ " %pip install cleanlab==v2.6.4\n",
" cmd = ' '.join([dep for dep in dependencies if dep != \"cleanlab\"])\n",
" %pip install $cmd\n",
"else:\n",
@@ -497,7 +498,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.12"
+ "version": "3.11.7"
}
},
"nbformat": 4,
@@ -15,15 +15,16 @@
"Consider Using Datalab\n",
"<br/>\n",
"\n",
- "If you are interested in detecting a wide variety of issues in your text dataset, check out the [Datalab text tutorial](https://docs.cleanlab.ai/stable/tutorials/datalab/text.html).\n",
+ "If you are interested in detecting a wide variety of issues in your text dataset, check out the [Datalab text tutorial](https://docs.cleanlab.ai/stable/tutorials/datalab/text.html). Datalab can detect many other types of data issues beyond label issues, whereas CleanLearning is a convenience method to handle noisy labels with sklearn-compatible classification models.\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "In this 5-minute quickstart tutorial, we use cleanlab to find potential label errors in an intent classification dataset composed of (text) customer service requests at an online bank. We consider a subset of the [Banking77-OOS Dataset](https://arxiv.org/abs/2106.04564) containing 1,000 customer service requests which can be classified into 10 categories corresponding to the intent of the request. cleanlab will shortlist examples that confuse our ML model the most; many of which are potential label errors, out-of-scope examples, or otherwise ambiguous examples. The `CleanLearning` class allows us to detect and filter out such bad data automatically, in order to train a more robust version of any ML model without having to change your existing modeling code! \n",
+ "In this 5-minute quickstart tutorial, we use cleanlab to find potential label errors in an intent classification dataset composed of (text) customer service requests at an online bank. We consider a subset of the [Banking77-OOS Dataset](https://arxiv.org/abs/2106.04564) containing 1,000 customer service requests which can be classified into 10 categories corresponding to the intent of the request. cleanlab will shortlist examples that confuse our ML model the most; many of which are potential label errors, out-of-scope examples, or otherwise ambiguous examples. cleanlab's `CleanLearning` class automatically detects and filters out such badly labeled data, in order to train a more robust version of any Machine Learning model. No change to your existing modeling code is required!\n",
"\n",
"\n",
"**Overview of what we'll do in this tutorial:**\n",
"\n",
@@ -101,7 +102,7 @@
"You can use `pip` to install all packages required for this tutorial as follows:\n",
"\n",
"```ipython3\n",
- "!pip install sklearn sentence-transformers\n",
+ "!pip install sentence-transformers\n",
"!pip install cleanlab\n",
"# Make sure to install the version corresponding to this tutorial\n",
"# E.g. if viewing master branch documentation:\n",
@@ -128,7 +129,7 @@
"os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\" # disable parallelism to avoid deadlocks with huggingface\n",
"\n",
"if \"google.colab\" in str(get_ipython()): # Check if it's running in Google Colab\n",
- " %pip install cleanlab==v2.6.3\n",
+ " %pip install cleanlab==v2.6.4\n",
" cmd = ' '.join([dep for dep in dependencies if dep != \"cleanlab\"])\n",
" %pip install $cmd\n",
"else:\n",
@@ -575,7 +576,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.12"
+ "version": "3.11.7"
}
},
"nbformat": 4,
@@ -6,7 +6,7 @@
"id": "eVufWTY3jRPx"
},
"source": [
- "# Audio Classification with SpeechBrain and Cleanlab\n",
+ "# Detecting Issues in an Audio Dataset with Datalab\n",
"\n",
"In this 5-minute quickstart tutorial, we use cleanlab to find label issues in the [Spoken Digit dataset](https://www.tensorflow.org/datasets/catalog/spoken_digit) (it's like MNIST for audio). The dataset contains 2,500 audio clips with English pronunciations of the digits 0 to 9 (these are the class labels to predict from the audio).\n",
"\n",
@@ -65,7 +65,7 @@
"You can use `pip` to install all packages required for this tutorial as follows:\n",
"\n",
"```ipython3\n",
- "!pip install tensorflow==2.12.1 sklearn tensorflow_io==0.32.0 huggingface_hub==0.17.0 speechbrain==0.5.13 \n",
+ "!pip install tensorflow==2.12.1 tensorflow_io==0.32.0 huggingface_hub==0.17.0 speechbrain==0.5.13 \n",
"!pip install \"cleanlab[datalab]\"\n",
"# Make sure to install the version corresponding to this tutorial\n",
"# E.g. if viewing master branch documentation:\n",
@@ -91,7 +91,7 @@
"os.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"3\" \n",
"\n",
"if \"google.colab\" in str(get_ipython()): # Check if it's running in Google Colab\n",
- " %pip install cleanlab==v2.6.3\n",
+ " %pip install cleanlab==v2.6.4\n",
" cmd = ' '.join([dep for dep in dependencies if dep != \"cleanlab\"])\n",
" %pip install $cmd\n",
"else:\n",