Datasets:

Modalities: Tabular, Text
Format: parquet
Size: < 1K rows
Libraries: Datasets, pandas
File size: 17,548 Bytes
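The facets above indicate the data ships as a single parquet file readable with the Hugging Face Datasets library or pandas. A minimal loading sketch follows; the repository id and the split name are placeholders (the dataset name is not shown on this page), so substitute the real values.

```python
# Minimal loading sketch -- "<owner>/<dataset-name>" and split="train" are
# placeholders, since the dataset id and split are not shown on this page.
from datasets import load_dataset

ds = load_dataset("<owner>/<dataset-name>", split="train")
print(ds)  # prints the schema and row count (< 1K rows)

# Equivalent view through pandas, since the data is stored as parquet
df = ds.to_pandas()
print(df.columns.tolist())
```

The record below is an example row as rendered by the dataset viewer.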
{"head_branch": "master", "contributor": "keras-team", "sha_fail": "2989b6f8fc2aca8bf8d0ffa32e58ceeaa3ef35ef", "sha_success": "0484165fa7cba3c095d95da19b1c45945b47c60b", "language": "Python", "repo_owner": "keras-team", "repo_name": "keras", "workflow_name": "Tests", "workflow_filename": "actions.yml", "workflow_path": ".github/workflows/actions.yml", "workflow": "name: Tests\n\non:\n  push:\n    branches: [ master ]\n  pull_request:\n  release:\n    types: [created]\n\npermissions:\n  contents: read\n\njobs:\n  build:\n    strategy:\n      fail-fast: false\n      matrix:\n        python-version: [3.9]\n        backend: [tensorflow, jax, torch, numpy]\n    name: Run tests\n    runs-on: ubuntu-latest\n    env:\n      PYTHON: ${{ matrix.python-version }}\n      KERAS_BACKEND: ${{ matrix.backend }}\n    steps:\n      - uses: actions/checkout@v3\n      - name: Check for changes in keras/applications\n        uses: dorny/paths-filter@v2\n        id: filter\n        with:\n          filters: |\n            applications:\n              - 'keras/applications/**'\n      - name: Set up Python\n        uses: actions/setup-python@v4\n        with:\n          python-version: ${{ matrix.python-version }}\n      - name: Get pip cache dir\n        id: pip-cache\n        run: |\n          python -m pip install --upgrade pip setuptools\n          echo \"dir=$(pip cache dir)\" >> $GITHUB_OUTPUT\n      - name: pip cache\n        uses: actions/cache@v3\n        with:\n          path: ${{ steps.pip-cache.outputs.dir }}\n          key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}-${{ hashFiles('requirements.txt') }}\n      - name: Install dependencies\n        run: |\n          pip install -r requirements.txt --progress-bar off --upgrade\n          pip install -e \".\" --progress-bar off --upgrade\n      - name: Test applications with pytest\n        if: ${{ steps.filter.outputs.applications == 'true' }}\n        run: |\n          pytest keras/applications --cov=keras.applications\n          coverage xml --include='keras/applications/*' -o apps-coverage.xml\n      - name: Codecov keras.applications\n        if: ${{ steps.filter.outputs.applications == 'true' }}\n        uses: codecov/codecov-action@v3\n        with:\n          env_vars: PYTHON,KERAS_BACKEND\n          flags: keras.applications,keras.applications-${{ matrix.backend }}\n          files: apps-coverage.xml\n          fail_ci_if_error: true\n      - name: Test with pytest\n        run: |\n          pytest keras --ignore keras/applications --cov=keras\n          coverage xml --omit='keras/applications/*' -o core-coverage.xml\n      - name: Codecov keras\n        uses: codecov/codecov-action@v3\n        with:\n          env_vars: PYTHON,KERAS_BACKEND\n          flags: keras,keras-${{ matrix.backend }}\n          files: core-coverage.xml\n          fail_ci_if_error: true\n\n  format:\n    name: Check the code format\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v3\n      - name: Set up Python 3.9\n        uses: actions/setup-python@v4\n        with:\n          python-version: '3.9'\n      - name: Get pip cache dir\n        id: pip-cache\n        run: |\n          python -m pip install --upgrade pip setuptools\n          echo \"dir=$(pip cache dir)\" >> $GITHUB_OUTPUT\n      - name: pip cache\n        uses: actions/cache@v3\n        with:\n          path: ${{ steps.pip-cache.outputs.dir }}\n          key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}-${{ hashFiles('requirements.txt') }}\n      - name: Install 
dependencies\n        run: |\n          pip install -r requirements.txt --progress-bar off --upgrade\n          pip install -e \".\" --progress-bar off --upgrade\n      - name: Lint\n        run: bash shell/lint.sh\n", "logs": "keras/testing/test_utils.py                                     11      1      4      1    87%\nkeras/trainers/__init__.py                                       0      0      0      0   100%\nkeras/trainers/compile_utils.py                                356     54    281     39    81%\nkeras/trainers/data_adapters/__init__.py                         0      0      0      0   100%\nkeras/trainers/data_adapters/array_data_adapter.py             141     19     58     11    84%\nkeras/trainers/data_adapters/data_adapter.py                    13      0      0      0   100%\nkeras/trainers/data_adapters/data_adapter_utils.py              92     21     61     13    73%\nkeras/trainers/data_adapters/generator_data_adapter.py          41      3      8      3    88%\nkeras/trainers/data_adapters/py_dataset_adapter.py             252     49     80     17    76%\nkeras/trainers/data_adapters/tf_dataset_adapter.py              47      0     20      0   100%\nkeras/trainers/data_adapters/torch_data_adapter.py              46      3     10      3    89%\nkeras/trainers/epoch_iterator.py                               105     13     72     13    83%\nkeras/trainers/trainer.py                                      193     23     89     14    84%\nkeras/utils/__init__.py                                         24      0      0      0   100%\nkeras/utils/argument_validation.py                              43     10     26      5    78%\nkeras/utils/audio_dataset_utils.py                              87     16     52      4    83%\nkeras/utils/backend_utils.py                                    37      6     12      3    78%\nkeras/utils/code_stats.py                                       40      3     34      2    91%\nkeras/utils/dataset_utils.py                                   286     61    202     45    74%\nkeras/utils/dtype_utils.py                                      25      0     16      0   100%\nkeras/utils/file_utils.py                                      221     52    129     23    71%\nkeras/utils/image_dataset_utils.py                              91      6     48      6    90%\nkeras/utils/image_utils.py                                     149     73     80     13    45%\nkeras/utils/io_utils.py                                         34      0     10      0   100%\nkeras/utils/jax_utils.py                                         7      3      4      1    45%\nkeras/utils/model_visualization.py                             193    169     90      0     8%\nkeras/utils/module_utils.py                                     32      1      6      1    95%\nkeras/utils/naming.py                                           34      1      8      1    95%\nkeras/utils/nest.py                                             42      9     18      3    77%\nkeras/utils/numerical_utils.py                                  58      4     26      5    89%\nkeras/utils/progbar.py                                         132     25     60      9    78%\nkeras/utils/python_utils.py                                     62      5     28      4    90%\nkeras/utils/rng_utils.py                                        16      1      6      3    82%\nkeras/utils/sequence_utils.py                                   41      8     24      6    78%\nkeras/utils/shape_utils.py                                     
 15      1     19      1    94%\nkeras/utils/summary_utils.py                                   212     39    116     18    77%\nkeras/utils/text_dataset_utils.py                               68      4     40      5    90%\nkeras/utils/tf_utils.py                                         68     32     38      6    53%\nkeras/utils/timeseries_dataset_utils.py                         62      4     48      5    92%\nkeras/utils/torch_utils.py                                      23      2      6      1    90%\nkeras/utils/traceback_utils.py                                 107     79     48      1    19%\nkeras/utils/tracking.py                                         97     10     54      5    82%\nkeras/version.py                                                 1      0      0      0   100%\n----------------------------------------------------------------------------------------------\nTOTAL                                                        34210  12322  13279   1442    62%\n\n=========================== short test summary info ============================\nFAILED keras/callbacks/model_checkpoint_test.py::ModelCheckpointTest::test_model_checkpoint_loading - ValueError: Arguments `target` and `output` must have the same shape. Received: target.shape=torch.Size([5, 1]), output.shape=torch.Size([5, 2])\n=========== 1 failed, 2902 passed, 306 skipped, 1 xpassed in 54.27s ============\n##[error]Process completed with exit code 1.\n", "diff": "diff --git a/examples/keras_io/vision/knowledge_distillation.py b/examples/keras_io/vision/knowledge_distillation.py\nnew file mode 100644\nindex 0000000000..8b0f2796a8\n--- /dev/null\n+++ b/examples/keras_io/vision/knowledge_distillation.py\n@@ -0,0 +1,246 @@\n+\"\"\"\n+Title: Knowledge Distillation\n+Author: [Kenneth Borup](https://twitter.com/Kennethborup)\n+Converted to Keras 3 by: [Md Awsafur Rahman](https://awsaf49.github.io)\n+Date created: 2020/09/01\n+Last modified: 2020/09/01\n+Description: Implementation of classical Knowledge Distillation.\n+Accelerator: GPU\n+\"\"\"\n+\"\"\"\n+## Introduction to Knowledge Distillation\n+\n+Knowledge Distillation is a procedure for model\n+compression, in which a small (student) model is trained to match a large pre-trained\n+(teacher) model. Knowledge is transferred from the teacher model to the student\n+by minimizing a loss function, aimed at matching softened teacher logits as well as\n+ground-truth labels.\n+\n+The logits are softened by applying a \"temperature\" scaling function in the softmax,\n+effectively smoothing out the probability distribution and revealing\n+inter-class relationships learned by the teacher.\n+\n+**Reference:**\n+\n+- [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531)\n+\"\"\"\n+\n+\"\"\"\n+## Setup\n+\"\"\"\n+\n+import os\n+\n+os.environ[\"KERAS_BACKEND\"] = \"tensorflow\"\n+\n+import keras\n+from keras import layers\n+from keras import ops\n+import tensorflow as tf\n+import numpy as np\n+\n+\"\"\"\n+## Construct `Distiller()` class\n+\n+The custom `Distiller()` class, overrides the `Model` methods `compile`, `compute_loss`,\n+and `call`. 
In order to use the distiller, we need:\n+\n+- A trained teacher model\n+- A student model to train\n+- A student loss function on the difference between student predictions and ground-truth\n+- A distillation loss function, along with a `temperature`, on the difference between the\n+soft student predictions and the soft teacher labels\n+- An `alpha` factor to weight the student and distillation loss\n+- An optimizer for the student and (optional) metrics to evaluate performance\n+\n+In the `compute_loss` method, we perform a forward pass of both the teacher and student,\n+calculate the loss with weighting of the `student_loss` and `distillation_loss` by `alpha`\n+and `1 - alpha`, respectively. Note: only the student weights are updated.\n+\"\"\"\n+\n+\n+class Distiller(keras.Model):\n+    def __init__(self, student, teacher):\n+        super().__init__()\n+        self.teacher = teacher\n+        self.student = student\n+\n+    def compile(\n+        self,\n+        optimizer,\n+        metrics,\n+        student_loss_fn,\n+        distillation_loss_fn,\n+        alpha=0.1,\n+        temperature=3,\n+    ):\n+        \"\"\"Configure the distiller.\n+\n+        Args:\n+            optimizer: Keras optimizer for the student weights\n+            metrics: Keras metrics for evaluation\n+            student_loss_fn: Loss function of difference between student\n+                predictions and ground-truth\n+            distillation_loss_fn: Loss function of difference between soft\n+                student predictions and soft teacher predictions\n+            alpha: weight to student_loss_fn and 1-alpha to distillation_loss_fn\n+            temperature: Temperature for softening probability distributions.\n+                Larger temperature gives softer distributions.\n+        \"\"\"\n+        super().compile(optimizer=optimizer, metrics=metrics)\n+        self.student_loss_fn = student_loss_fn\n+        self.distillation_loss_fn = distillation_loss_fn\n+        self.alpha = alpha\n+        self.temperature = temperature\n+\n+    def compute_loss(\n+        self, x=None, y=None, y_pred=None, sample_weight=None, allow_empty=False\n+    ):\n+        teacher_pred = self.teacher(x, training=False)\n+        student_loss = self.student_loss_fn(y, y_pred)\n+\n+        distillation_loss = self.distillation_loss_fn(\n+            ops.softmax(teacher_pred / self.temperature, axis=1),\n+            ops.softmax(y_pred / self.temperature, axis=1),\n+        ) * (self.temperature**2)\n+\n+        loss = self.alpha * student_loss + (1 - self.alpha) * distillation_loss\n+        return loss\n+\n+    def call(self, x):\n+        return self.student(x)\n+\n+\n+\"\"\"\n+## Create student and teacher models\n+\n+Initialy, we create a teacher model and a smaller student model. 
Both models are\n+convolutional neural networks and created using `Sequential()`,\n+but could be any Keras model.\n+\"\"\"\n+\n+# Create the teacher\n+teacher = keras.Sequential(\n+    [\n+        keras.Input(shape=(28, 28, 1)),\n+        layers.Conv2D(256, (3, 3), strides=(2, 2), padding=\"same\"),\n+        layers.LeakyReLU(negative_slope=0.2),\n+        layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding=\"same\"),\n+        layers.Conv2D(512, (3, 3), strides=(2, 2), padding=\"same\"),\n+        layers.Flatten(),\n+        layers.Dense(10),\n+    ],\n+    name=\"teacher\",\n+)\n+\n+# Create the student\n+student = keras.Sequential(\n+    [\n+        keras.Input(shape=(28, 28, 1)),\n+        layers.Conv2D(16, (3, 3), strides=(2, 2), padding=\"same\"),\n+        layers.LeakyReLU(negative_slope=0.2),\n+        layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding=\"same\"),\n+        layers.Conv2D(32, (3, 3), strides=(2, 2), padding=\"same\"),\n+        layers.Flatten(),\n+        layers.Dense(10),\n+    ],\n+    name=\"student\",\n+)\n+\n+# Clone student for later comparison\n+student_scratch = keras.models.clone_model(student)\n+\n+\"\"\"\n+## Prepare the dataset\n+\n+The dataset used for training the teacher and distilling the teacher is\n+[MNIST](https://keras.io/api/datasets/mnist/), and the procedure would be equivalent for\n+any other\n+dataset, e.g. [CIFAR-10](https://keras.io/api/datasets/cifar10/), with a suitable choice\n+of models. Both the student and teacher are trained on the training set and evaluated on\n+the test set.\n+\"\"\"\n+\n+# Prepare the train and test dataset.\n+batch_size = 64\n+(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n+\n+# Normalize data\n+x_train = x_train.astype(\"float32\") / 255.0\n+x_train = np.reshape(x_train, (-1, 28, 28, 1))\n+\n+x_test = x_test.astype(\"float32\") / 255.0\n+x_test = np.reshape(x_test, (-1, 28, 28, 1))\n+\n+\n+\"\"\"\n+## Train the teacher\n+\n+In knowledge distillation we assume that the teacher is trained and fixed. 
Thus, we start\n+by training the teacher model on the training set in the usual way.\n+\"\"\"\n+\n+# Train teacher as usual\n+teacher.compile(\n+    optimizer=keras.optimizers.Adam(),\n+    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n+    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n+)\n+\n+# Train and evaluate teacher on data.\n+teacher.fit(x_train, y_train, epochs=5)\n+teacher.evaluate(x_test, y_test)\n+\n+\"\"\"\n+## Distill teacher to student\n+\n+We have already trained the teacher model, and we only need to initialize a\n+`Distiller(student, teacher)` instance, `compile()` it with the desired losses,\n+hyperparameters and optimizer, and distill the teacher to the student.\n+\"\"\"\n+\n+# Initialize and compile distiller\n+distiller = Distiller(student=student, teacher=teacher)\n+distiller.compile(\n+    optimizer=keras.optimizers.Adam(),\n+    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n+    student_loss_fn=keras.losses.SparseCategoricalCrossentropy(\n+        from_logits=True\n+    ),\n+    distillation_loss_fn=keras.losses.KLDivergence(),\n+    alpha=0.1,\n+    temperature=10,\n+)\n+\n+# Distill teacher to student\n+distiller.fit(x_train, y_train, epochs=3)\n+\n+# Evaluate student on test dataset\n+distiller.evaluate(x_test, y_test)\n+\n+\"\"\"\n+## Train student from scratch for comparison\n+\n+We can also train an equivalent student model from scratch without the teacher, in order\n+to evaluate the performance gain obtained by knowledge distillation.\n+\"\"\"\n+\n+# Train student as doen usually\n+student_scratch.compile(\n+    optimizer=keras.optimizers.Adam(),\n+    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n+    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n+)\n+\n+# Train and evaluate student trained from scratch.\n+student_scratch.fit(x_train, y_train, epochs=3)\n+student_scratch.evaluate(x_test, y_test)\n+\n+\"\"\"\n+If the teacher is trained for 5 full epochs and the student is distilled on this teacher\n+for 3 full epochs, you should in this example experience a performance boost compared to\n+training the same student model from scratch, and even compared to the teacher itself.\n+You should expect the teacher to have accuracy around 97.6%, the student trained from\n+scratch should be around 97.6%, and the distilled student should be around 98.1%. Remove\n+or try out different seeds to use different weight initializations.\n+\"\"\"\n"}
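Each row pairs a failing commit (`sha_fail`) with the commit that made CI pass again (`sha_success`), together with the repository coordinates, the GitHub Actions workflow definition, the failing test log, and the repairing diff. A sketch of how such a row might be inspected, reusing the `ds` object from the hedged loading example above (field names are taken from the record shown on this page):

```python
# Inspect one row; field names follow the example record above.
row = ds[0]

print(row["repo_owner"], row["repo_name"], "-", row["workflow_name"])
print("failing commit:", row["sha_fail"])
print("fixing commit: ", row["sha_success"])

# The tail of the CI log carries the pytest failure summary
print("\n".join(row["logs"].splitlines()[-5:]))

# The unified diff that turned the failing run into a passing one
print(row["diff"][:400])
```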