{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {},
"id": "view-in-github"
},
"source": [
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"# Tutorial 1: Introduction to CNNs\n",
"\n",
"**Week 2, Day 2: Convnets and DL Thinking**\n",
"\n",
"**By Neuromatch Academy**\n",
"\n",
"__Content creators:__ Dawn Estes McKnight, Richard Gerum, Cassidy Pirlot, Rohan Saha, Liam Peet-Pare, Saeed Najafi, Alona Fyshe\n",
"\n",
"__Content reviewers:__ Saeed Salehi, Lily Cheng, Yu-Fang Yang, Polina Turishcheva, Bettina Hein, Kelson Shilling-Scrivo\n",
"\n",
"__Content editors:__ Gagana B, Nina Kudryashova, Anmol Gupta, Xiaoxiong Lin, Spiros Chavlis\n",
"\n",
"__Production editors:__ Alex Tran-Van-Minh, Gagana B, Spiros Chavlis\n",
"\n",
"
\n",
"\n",
"*Based on material from:* Konrad Kording, Hmrishav Bandyopadhyay, Rahul Shekhar, Tejas Srivastava"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Tutorial Objectives\n",
"At the end of this tutorial, we will be able to:\n",
"- Define what convolution is\n",
"- Implement convolution as an operation\n",
"\n",
"In the Bonus materials of this tutorial, you will be able to:\n",
"\n",
"- Train a CNN by writing your own train loop\n",
"- Recognize the symptoms of overfitting and how to cure them\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# @markdown\n",
"from IPython.display import IFrame\n",
"from ipywidgets import widgets\n",
"out = widgets.Output()\n",
"with out:\n",
" print(f\"If you want to download the slides: https://osf.io/download/s8xz5/\")\n",
" display(IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/s8xz5/?direct%26mode=render%26action=download%26mode=render\", width=730, height=410))\n",
"display(out)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install dependencies\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Install dependencies\n",
"!pip install Pillow --quiet"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install and import feedback gadget\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Install and import feedback gadget\n",
"\n",
"!pip3 install vibecheck datatops --quiet\n",
"\n",
"from vibecheck import DatatopsContentReviewContainer\n",
"def content_review(notebook_section: str):\n",
" return DatatopsContentReviewContainer(\n",
" \"\", # No text prompt\n",
" notebook_section,\n",
" {\n",
" \"url\": \"https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab\",\n",
" \"name\": \"neuromatch_dl\",\n",
" \"user_key\": \"f379rz8y\",\n",
" },\n",
" ).render()\n",
"\n",
"\n",
"feedback_prefix = \"W2D2_T1\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Imports\n",
"import time\n",
"import torch\n",
"import scipy.signal\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"\n",
"import torchvision.transforms as transforms\n",
"import torchvision.datasets as datasets\n",
"from torch.utils.data import DataLoader\n",
"\n",
"from tqdm.notebook import tqdm, trange\n",
"from PIL import Image"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Figure Settings\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Figure Settings\n",
"import logging\n",
"logging.getLogger('matplotlib.font_manager').disabled = True\n",
"\n",
"import ipywidgets as widgets # Interactive display\n",
"%matplotlib inline\n",
"%config InlineBackend.figure_format = 'retina'\n",
"plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Helper functions\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Helper functions\n",
"from scipy.signal import correlate2d\n",
"import zipfile, gzip, shutil, tarfile\n",
"\n",
"\n",
"def download_data(fname, folder, url, tar):\n",
" \"\"\"\n",
" Data downloading from OSF.\n",
"\n",
" Args:\n",
" fname : str\n",
" The name of the archive\n",
" folder : str\n",
" The name of the destination folder\n",
" url : str\n",
" The download url\n",
" tar : boolean\n",
" `tar=True` the archive is `fname`.tar.gz, `tar=False` is `fname`.zip\n",
"\n",
" Returns:\n",
" Nothing.\n",
" \"\"\"\n",
"\n",
" if not os.path.exists(folder):\n",
" print(f'\\nDownloading {folder} dataset...')\n",
" r = requests.get(url, allow_redirects=True)\n",
" with open(fname, 'wb') as fh:\n",
" fh.write(r.content)\n",
" print(f'\\nDownloading {folder} completed.')\n",
"\n",
" print('\\nExtracting the files...\\n')\n",
" if not tar:\n",
" with zipfile.ZipFile(fname, 'r') as fz:\n",
" fz.extractall()\n",
" else:\n",
" with tarfile.open(fname) as ft:\n",
" ft.extractall()\n",
" # Remove the archive\n",
" os.remove(fname)\n",
"\n",
" # Extract all .gz files\n",
" foldername = folder + '/raw/'\n",
" for filename in os.listdir(foldername):\n",
" # Remove the extension\n",
" fname = filename.replace('.gz', '')\n",
" # Gunzip all files\n",
" with gzip.open(foldername + filename, 'rb') as f_in:\n",
" with open(foldername + fname, 'wb') as f_out:\n",
" shutil.copyfileobj(f_in, f_out)\n",
" os.remove(foldername+filename)\n",
" else:\n",
" print(f'{folder} dataset has already been downloaded.\\n')\n",
"\n",
"\n",
"def check_shape_function(func, image_shape, kernel_shape):\n",
" \"\"\"\n",
" Helper function to check shape implementation\n",
"\n",
" Args:\n",
" func: f.__name__\n",
" Function name\n",
" image_shape: tuple\n",
" Image shape\n",
" kernel_shape: tuple\n",
" Kernel shape\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" correct_shape = correlate2d(np.random.rand(*image_shape), np.random.rand(*kernel_shape), \"valid\").shape\n",
" user_shape = func(image_shape, kernel_shape)\n",
" if correct_shape != user_shape:\n",
" print(f\"❌ Your calculated output shape is not correct.\")\n",
" else:\n",
" print(f\"✅ Output for image_shape: {image_shape} and kernel_shape: {kernel_shape}, output_shape: {user_shape}, is correct.\")\n",
"\n",
"\n",
"def check_conv_function(func, image, kernel):\n",
" \"\"\"\n",
" Helper function to check conv_function\n",
"\n",
" Args:\n",
" func: f.__name__\n",
" Function name\n",
" image: np.ndarray\n",
" Image matrix\n",
" kernel_shape: np.ndarray\n",
" Kernel matrix\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" solution_user = func(image, kernel)\n",
" solution_scipy = correlate2d(image, kernel, \"valid\")\n",
" result_right = (solution_user == solution_scipy).all()\n",
" if result_right:\n",
" print(\"✅ The function calculated the convolution correctly.\")\n",
" else:\n",
" print(\"❌ The function did not produce the right output.\")\n",
" print(\"For the input matrix:\")\n",
" print(image)\n",
" print(\"and the kernel:\")\n",
" print(kernel)\n",
" print(\"the function returned:\")\n",
" print(solution_user)\n",
" print(\"the correct output would be:\")\n",
" print(solution_scipy)\n",
"\n",
"\n",
"def check_pooling_net(net, device='cpu'):\n",
" \"\"\"\n",
" Helper function to check pooling output\n",
"\n",
" Args:\n",
" net: nn.module\n",
" Net instance\n",
" device: string\n",
" GPU/CUDA if available, CPU otherwise.\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(device)\n",
" output_x = net(x_img)\n",
" output_x = output_x.squeeze(dim=0).detach().cpu().numpy()\n",
"\n",
" right_output = [\n",
" [0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000,\n",
" 0.000000, 0.000000, 0.000000, 0.000000, 0.000000],\n",
" [0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000,\n",
" 0.000000, 0.000000, 0.000000, 0.000000, 0.000000],\n",
" [9.309552, 1.6216984, 0.000000, 0.000000, 0.000000, 0.000000, 2.2708383,\n",
" 2.6654134, 1.2271233, 0.000000, 0.000000, 0.000000],\n",
" [12.873457, 13.318945, 9.46229, 4.663746, 0.000000, 0.000000, 1.8889914,\n",
" 0.31068993, 0.000000, 0.000000, 0.000000, 0.000000],\n",
" [0.000000, 8.354934, 10.378724, 16.882853, 18.499334, 4.8546696, 0.000000,\n",
" 0.000000, 0.000000, 6.29296, 5.096506, 0.000000],\n",
" [0.000000, 0.000000, 0.31068993, 5.7074604, 9.984148, 4.12916, 8.10037,\n",
" 7.667609, 0.000000, 0.000000, 1.2780352, 0.000000],\n",
" [0.000000, 2.436305, 3.9764223, 0.000000, 0.000000, 0.000000, 12.98801,\n",
" 17.1756, 17.531992, 11.664275, 1.5453291, 0.000000],\n",
" [4.2691708, 2.3217516, 0.000000, 0.000000, 1.3798618, 0.05612564, 0.000000,\n",
" 0.000000, 11.218788, 16.360992, 13.980816, 8.354935],\n",
" [1.8126211, 0.000000, 0.000000, 2.9199777, 3.9382377, 0.000000, 0.000000,\n",
" 0.000000, 0.000000, 0.000000, 6.076582, 10.035061],\n",
" [0.000000, 0.92164516, 4.434638, 0.7816348, 0.000000, 0.000000, 0.000000,\n",
" 0.000000, 0.000000, 0.000000, 0.000000, 0.83254766],\n",
" [0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000,\n",
" 0.000000, 0.000000, 0.000000, 0.000000, 0.000000],\n",
" [0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000, 0.000000,\n",
" 0.000000, 0.000000, 0.000000, 0.000000, 0.000000]\n",
" ]\n",
"\n",
" right_shape = (3, 12, 12)\n",
"\n",
" if output_x.shape != right_shape:\n",
" print(f\"❌ Your output does not have the right dimensions. Your output is {output_x.shape} the expected output is {right_shape}\")\n",
" elif (output_x[0] != right_output).all():\n",
" print(\"❌ Your output is not right.\")\n",
" else:\n",
" print(\"✅ Your network produced the correct output.\")\n",
"\n",
"\n",
"# Just returns accuracy on test data\n",
"def test(model, device, data_loader):\n",
" \"\"\"\n",
" Test function\n",
"\n",
" Args:\n",
" net: nn.module\n",
" Net instance\n",
" device: string\n",
" GPU/CUDA if available, CPU otherwise.\n",
" data_loader: torch.loader\n",
" Test loader\n",
"\n",
" Returns:\n",
" acc: float\n",
" Test accuracy\n",
" \"\"\"\n",
" model.eval()\n",
" correct = 0\n",
" total = 0\n",
" for data in data_loader:\n",
" inputs, labels = data\n",
" inputs = inputs.to(device).float()\n",
" labels = labels.to(device).long()\n",
"\n",
" outputs = model(inputs)\n",
" _, predicted = torch.max(outputs, 1)\n",
" total += labels.size(0)\n",
" correct += (predicted == labels).sum().item()\n",
"\n",
" acc = 100 * correct / total\n",
" return f\"{acc}%\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plotting Functions\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Plotting Functions\n",
"\n",
"def display_image_from_greyscale_array(matrix, title):\n",
" \"\"\"\n",
" Display image from greyscale array\n",
"\n",
" Args:\n",
" matrix: np.ndarray\n",
" Image\n",
" title: string\n",
" Title of plot\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" _matrix = matrix.astype(np.uint8)\n",
" _img = Image.fromarray(_matrix, 'L')\n",
" plt.figure(figsize=(3, 3))\n",
" plt.imshow(_img, cmap='gray', vmin=0, vmax=255) # Using 220 instead of 255 so the examples show up better\n",
" plt.title(title)\n",
" plt.axis('off')\n",
"\n",
"\n",
"def make_plots(original, actual_convolution, solution):\n",
" \"\"\"\n",
" Function to build original image/obtained solution and actual convolution\n",
"\n",
" Args:\n",
" original: np.ndarray\n",
" Image\n",
" actual_convolution: np.ndarray\n",
" Expected convolution output\n",
" solution: np.ndarray\n",
" Obtained convolution output\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" display_image_from_greyscale_array(original, \"Original Image\")\n",
" display_image_from_greyscale_array(actual_convolution, \"Convolution result\")\n",
" display_image_from_greyscale_array(solution, \"Your solution\")\n",
"\n",
"\n",
"def plot_loss_accuracy(train_loss, train_acc,\n",
" validation_loss, validation_acc):\n",
" \"\"\"\n",
" Code to plot loss and accuracy\n",
"\n",
" Args:\n",
" train_loss: list\n",
" Log of training loss\n",
" validation_loss: list\n",
" Log of validation loss\n",
" train_acc: list\n",
" Log of training accuracy\n",
" validation_acc: list\n",
" Log of validation accuracy\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" epochs = len(train_loss)\n",
" fig, (ax1, ax2) = plt.subplots(1, 2)\n",
" ax1.plot(list(range(epochs)), train_loss, label='Training Loss')\n",
" ax1.plot(list(range(epochs)), validation_loss, label='Validation Loss')\n",
" ax1.set_xlabel('Epochs')\n",
" ax1.set_ylabel('Loss')\n",
" ax1.set_title('Epoch vs Loss')\n",
" ax1.legend()\n",
"\n",
" ax2.plot(list(range(epochs)), train_acc, label='Training Accuracy')\n",
" ax2.plot(list(range(epochs)), validation_acc, label='Validation Accuracy')\n",
" ax2.set_xlabel('Epochs')\n",
" ax2.set_ylabel('Accuracy')\n",
" ax2.set_title('Epoch vs Accuracy')\n",
" ax2.legend()\n",
" fig.set_size_inches(15.5, 5.5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set random seed\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" Executing `set_seed(seed=seed)` you are setting the seed\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Set random seed\n",
"\n",
"# @markdown Executing `set_seed(seed=seed)` you are setting the seed\n",
"\n",
"# For DL its critical to set the random seed so that students can have a\n",
"# baseline to compare their results to expected results.\n",
"# Read more here: https://pytorch.org/docs/stable/notes/randomness.html\n",
"\n",
"# Call `set_seed` function in the exercises to ensure reproducibility.\n",
"import random\n",
"import torch\n",
"\n",
"def set_seed(seed=None, seed_torch=True):\n",
" \"\"\"\n",
" Function that controls randomness.\n",
" NumPy and random modules must be imported.\n",
"\n",
" Args:\n",
" seed : Integer\n",
" A non-negative integer that defines the random state. Default is `None`.\n",
" seed_torch : Boolean\n",
" If `True` sets the random seed for pytorch tensors, so pytorch module\n",
" must be imported. Default is `True`.\n",
"\n",
" Returns:\n",
" Nothing.\n",
" \"\"\"\n",
" if seed is None:\n",
" seed = np.random.choice(2 ** 32)\n",
" random.seed(seed)\n",
" np.random.seed(seed)\n",
" if seed_torch:\n",
" torch.manual_seed(seed)\n",
" torch.cuda.manual_seed_all(seed)\n",
" torch.cuda.manual_seed(seed)\n",
" torch.backends.cudnn.benchmark = False\n",
" torch.backends.cudnn.deterministic = True\n",
"\n",
" print(f'Random seed {seed} has been set.')\n",
"\n",
"\n",
"# In case that `DataLoader` is used\n",
"def seed_worker(worker_id):\n",
" \"\"\"\n",
" DataLoader will reseed workers following randomness in\n",
" multi-process data loading algorithm.\n",
"\n",
" Args:\n",
" worker_id: integer\n",
" ID of subprocess to seed. 0 means that\n",
" the data will be loaded in the main process\n",
" Refer: https://pytorch.org/docs/stable/data.html#data-loading-randomness for more details\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" worker_seed = torch.initial_seed() % 2**32\n",
" np.random.seed(worker_seed)\n",
" random.seed(worker_seed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set device (GPU or CPU). Execute `set_device()`\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Set device (GPU or CPU). Execute `set_device()`\n",
"# especially if torch modules are used.\n",
"\n",
"# Inform the user if the notebook uses GPU or CPU.\n",
"\n",
"def set_device():\n",
" \"\"\"\n",
" Set the device. CUDA if available, CPU otherwise\n",
"\n",
" Args:\n",
" None\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
" if device != \"cuda\":\n",
" print(\"WARNING: For this notebook to perform best, \"\n",
" \"if possible, in the menu under `Runtime` -> \"\n",
" \"`Change runtime type.` select `GPU` \")\n",
" else:\n",
" print(\"GPU is enabled in this notebook.\")\n",
"\n",
" return device"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"SEED = 2021\n",
"set_seed(seed=SEED)\n",
"DEVICE = set_device()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 0: Recap the Experience from Last Week\n",
"\n",
"*Time estimate: ~15mins*"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Last week you learned a lot! Recall that overparametrized ANNs are efficient universal approximators, but also that ANNs can memorize our data. However, regularization can help ANNs to better generalize. You were introduced to several regularization techniques such as *L1*, *L2*, *Data Augmentation*, and *Dropout*.\n",
"\n",
"Today we'll be talking about other ways to simplify ANNs, by making smart changes to their architecture."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Video 1: Introduction to CNNs and RNNs\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# @title Video 1: Introduction to CNNs and RNNs\n",
"from ipywidgets import widgets\n",
"from IPython.display import YouTubeVideo\n",
"from IPython.display import IFrame\n",
"from IPython.display import display\n",
"\n",
"\n",
"class PlayVideo(IFrame):\n",
" def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
" self.id = id\n",
" if source == 'Bilibili':\n",
" src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
" elif source == 'Osf':\n",
" src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
" super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
"\n",
"\n",
"def display_videos(video_ids, W=400, H=300, fs=1):\n",
" tab_contents = []\n",
" for i, video_id in enumerate(video_ids):\n",
" out = widgets.Output()\n",
" with out:\n",
" if video_ids[i][0] == 'Youtube':\n",
" video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
" height=H, fs=fs, rel=0)\n",
" print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
" else:\n",
" video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
" height=H, fs=fs, autoplay=False)\n",
" if video_ids[i][0] == 'Bilibili':\n",
" print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
" elif video_ids[i][0] == 'Osf':\n",
" print(f'Video available at https://osf.io/{video.id}')\n",
" display(video)\n",
" tab_contents.append(out)\n",
" return tab_contents\n",
"\n",
"\n",
"video_ids = [('Youtube', '5598K-hS89A'), ('Bilibili', 'BV1cL411p7rz')]\n",
"tab_contents = display_videos(video_ids, W=730, H=410)\n",
"tabs = widgets.Tab()\n",
"tabs.children = tab_contents\n",
"for i in range(len(tab_contents)):\n",
" tabs.set_title(i, video_ids[i][0])\n",
"display(tabs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Introduction_to_CNNs_and_RNNs_Video\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Think! 0: Regularization & effective number of params\n",
"Let's think back to last week, when you learned about regularization. Recall that regularization comes in several forms. For example, L1 regularization adds a term to the loss function that penalizes based on the sum of the _absolute_ magnitude of the weights. Below are the results from training a simple multilayer perceptron with one hidden layer (b) on a simple toy dataset (a).\n",
"\n",
"Below that are two graphics that show the effect of regularization on both the number of non-zero weights (d), and on the network's accuracy (c).\n",
"\n",
"What do you notice?\n",
"\n",
"**Note**: Dense layers are the same as fully-connected layers. And pytorch calls them linear layers. Confusing, but now you know!\n",
"\n",
"\n",
"
\n",
"\n",
"Adopted from A. Zhang, Z. C. Lipton, M. Li and A. J. Smola, _[Dive into Deep Learning](http://d2l.ai/chapter_convolutional-neural-networks/conv-layer.html)_.\n",
"\n",
"
\n",
"\n",
"**Note:** You need to run the cell to activate the sliders, and again to run once changing the sliders.\n",
"\n",
"**Tip:** In this animation, and all the ones that follow, you can hover over the parts of the code underlined in red to change them.\n",
"\n",
"**Tip:** Below, the function is called `Conv2d` because the convolutional filter is a matrix with two dimensions (2D). There are also 1D and 3D convolutions, but we won't talk about them today."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Interactive Demo 2: Visualization of Convolution"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"**Important:** Change the bool variable `run_demo` to `True` by ticking the box, in order to experiment with the demo. Due to video rendering on jupyter-book, we had to remove it from the automatic execution."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" *Run this cell to enable the widget!*\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @markdown *Run this cell to enable the widget!*\n",
"\n",
"from IPython.display import HTML\n",
"\n",
"id_html = 2\n",
"url = f'https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D2_ConvnetsAndDlThinking/static/interactive_demo{id_html}.html'\n",
"run_demo = False # @param {type:\"boolean\"}\n",
"if run_demo:\n",
" display(HTML(url))"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"#### Definitional Note\n",
"\n",
"\n",
"If you have a background in signal processing or math, you may have already heard of convolution. However, the definitions in other domains and the one we use here are slightly different. The more common definition involves flipping the kernel horizontally and vertically before sliding.\n",
"\n",
"**For our purposes, no flipping is needed. If you are familiar with conventions involving flipping, just assume the kernel is pre-flipped.**\n",
"\n",
"In more general usage, the no-flip operation that we call convolution is known as _cross-correlation_ (hence the usage of `scipy.signal.correlate2d` in the next exercise). Early papers used the more common definition of convolution, but not using a flip is easier to visualize, and in fact the lack of flip does not impact a CNN's ability to learn."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 2.1: Convolution of a Simple Kernel\n",
"At its core, convolution is just repeatedly multiplying a matrix, known as a _kernel_ or _filter_, with some other, larger matrix (in our case the pixels of an image). Consider the below image and kernel:\n",
"\n",
"\\begin{align}\n",
"\\textbf{Image} &=\n",
"\\begin{bmatrix}0 & 200 & 200 \\\\0 & 0 & 200 \\\\ 0 & 0 & 0\n",
"\\end{bmatrix} \\\\ \\\\\n",
"\\textbf{Kernel} &=\n",
"\\begin{bmatrix} \\frac{1}{4} &\\frac{1}{4} \\\\\\frac{1}{4} & \\frac{1}{4}\n",
"\\end{bmatrix}\n",
"\\end{align}\n",
"\n",
"Perform (by hand) the operations needed to convolve the kernel and image above. Afterwards enter your results in the \"solution\" section in the code below. Think about what this specific kernel is doing to the original image.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"def conv_check():\n",
" \"\"\"\n",
" Demonstration of convolution operation\n",
"\n",
" Args:\n",
" None\n",
"\n",
" Returns:\n",
" original: np.ndarray\n",
" Image\n",
" actual_convolution: np.ndarray\n",
" Expected convolution output\n",
" solution: np.ndarray\n",
" Obtained convolution output\n",
" kernel: np.ndarray\n",
" Kernel\n",
" \"\"\"\n",
" ####################################################################\n",
" # Fill in missing code below (the elements of the matrix),\n",
" # then remove or comment the line below to test your function\n",
" raise NotImplementedError(\"Fill in the solution matrix, then delete this\")\n",
" ####################################################################\n",
" # Write the solution array and call the function to verify it!\n",
" solution = ...\n",
"\n",
" original = np.array([\n",
" [0, 200, 200],\n",
" [0, 0, 200],\n",
" [0, 0, 0]\n",
" ])\n",
"\n",
" kernel = np.array([\n",
" [0.25, 0.25],\n",
" [0.25, 0.25]\n",
" ])\n",
"\n",
" actual_convolution = scipy.signal.correlate2d(original, kernel, mode=\"valid\")\n",
"\n",
" if (solution == actual_convolution).all():\n",
" print(\"✅ Your solution is correct!\\n\")\n",
" else:\n",
" print(\"❌ Your solution is incorrect.\\n\")\n",
"\n",
" return original, kernel, actual_convolution, solution\n",
"\n",
"\n",
"\n",
"## Uncomment to test your solution!\n",
"# original, kernel, actual_convolution, solution = conv_check()\n",
"# make_plots(original, actual_convolution, solution)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main/tutorials/W2D2_ConvnetsAndDlThinking/solutions/W2D2_Tutorial1_Solution_78a81e50.py)\n",
"\n",
"*Example output:*\n",
"\n",
"\n",
"\n",
"
\n",
"\n",
"
\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Convolution_of_a_simple_kernel_Exercise\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 2.2: Convolution Output Size\n",
"\n",
"Now, you have manually calculated a convolution. How did this change the shape of the output? When you know the shapes of the input matrix and kernel, what is the shape of the output?\n",
"\n",
"**Hint:** If you have problems figuring out what the output shape should look like, go back to the visualisation and see how the output shape changes as you modify the image and kernel size."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"def calculate_output_shape(image_shape, kernel_shape):\n",
" \"\"\"\n",
" Helper function to calculate output shape\n",
"\n",
" Args:\n",
" image_shape: tuple\n",
" Image shape\n",
" kernel_shape: tuple\n",
" Kernel shape\n",
"\n",
" Returns:\n",
" output_height: int\n",
" Output Height\n",
" output_width: int\n",
" Output Width\n",
" \"\"\"\n",
" image_height, image_width = image_shape\n",
" kernel_height, kernel_width = kernel_shape\n",
" ####################################################################\n",
" # Fill in missing code below, then remove or comment the line below to test your function\n",
" raise NotImplementedError(\"Fill in the lines below, then delete this\")\n",
" ####################################################################\n",
" output_height = ...\n",
" output_width = ...\n",
" return output_height, output_width\n",
"\n",
"\n",
"\n",
"# Here we check if your function works correcly by applying it to different image\n",
"# and kernel shapes\n",
"# check_shape_function(calculate_output_shape, image_shape=(3, 3), kernel_shape=(2, 2))\n",
"# check_shape_function(calculate_output_shape, image_shape=(3, 4), kernel_shape=(2, 3))\n",
"# check_shape_function(calculate_output_shape, image_shape=(5, 5), kernel_shape=(5, 5))\n",
"# check_shape_function(calculate_output_shape, image_shape=(10, 20), kernel_shape=(3, 2))\n",
"# check_shape_function(calculate_output_shape, image_shape=(100, 200), kernel_shape=(40, 30))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main/tutorials/W2D2_ConvnetsAndDlThinking/solutions/W2D2_Tutorial1_Solution_7c652c63.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Convolution_output_size_Exercise\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Coding Exercise 2.3: Coding a Convolution\n",
"\n",
"Here, we have the skeleton of a function that performs convolution using the provided image and kernel matrices.\n",
"\n",
"*Exercise:* Fill in the missing lines of code. You can test your function by uncommenting the sections beneath it.\n",
"\n",
"Note: in more general situations, once you understand convolutions, you can use functions already available in `pytorch`/`numpy` to perform convolution (such as `scipy.signal.correlate2d` or `scipy.signal.convolve2d`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"def convolution2d(image, kernel):\n",
" \"\"\"\n",
" Convolves a 2D image matrix with a kernel matrix.\n",
"\n",
" Args:\n",
" image: np.ndarray\n",
" Image\n",
" kernel: np.ndarray\n",
" Kernel\n",
"\n",
" Returns:\n",
" output: np.ndarray\n",
" Output of convolution\n",
" \"\"\"\n",
"\n",
" # Get the height/width of the image, kernel, and output\n",
" im_h, im_w = image.shape\n",
" ker_h, ker_w = kernel.shape\n",
" out_h = im_h - ker_h + 1\n",
" out_w = im_w - ker_w + 1\n",
"\n",
" # Create an empty matrix in which to store the output\n",
" output = np.zeros((out_h, out_w))\n",
"\n",
" # Iterate over the different positions at which to apply the kernel,\n",
" # storing the results in the output matrix\n",
" for out_row in range(out_h):\n",
" for out_col in range(out_w):\n",
" # Overlay the kernel on part of the image\n",
" # (multiply each element of the kernel with some element of the image, then sum)\n",
" # to determine the output of the matrix at a point\n",
" current_product = 0\n",
" for i in range(ker_h):\n",
" for j in range(ker_w):\n",
" ####################################################################\n",
" # Fill in missing code below (...),\n",
" # then remove or comment the line below to test your function\n",
" raise NotImplementedError(\"Implement the convolution function\")\n",
" ####################################################################\n",
" current_product += ...\n",
"\n",
" output[out_row, out_col] = current_product\n",
"\n",
" return output\n",
"\n",
"\n",
"\n",
"## Tests\n",
"# First, we test the parameters we used before in the manual-calculation example\n",
"image = np.array([[0, 200, 200], [0, 0, 200], [0, 0, 0]])\n",
"kernel = np.array([[0.25, 0.25], [0.25, 0.25]])\n",
"# check_conv_function(convolution2d, image, kernel)\n",
"\n",
"# Next, we test with a different input and kernel (the numbers 1-9 and 1-4)\n",
"image = np.arange(9).reshape(3, 3)\n",
"kernel = np.arange(4).reshape(2, 2)\n",
"# check_conv_function(convolution2d, image, kernel)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main/tutorials/W2D2_ConvnetsAndDlThinking/solutions/W2D2_Tutorial1_Solution_4f643447.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Coding_a_Convolution_Exercise\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Convolution on the Chicago Skyline\n",
"\n",
"After you have finished programming the above convolution function, run the coding cell below, which applies two different kernels to a greyscale picture of Chicago and takes the geometric average of the results.\n",
"\n",
"**Make sure you remove all print statements from your convolution2d implementation, or this will run for a _very_ long time.** It should take somewhere between 10 seconds and 1 minute.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" ### Load images (run me)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @markdown ### Load images (run me)\n",
"\n",
"import requests, os\n",
"\n",
"if not os.path.exists('images/'):\n",
" os.mkdir('images/')\n",
"\n",
"url = \"https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D2_ConvnetsAndDlThinking/static/chicago_skyline_shrunk_v2.bmp\"\n",
"r = requests.get(url, allow_redirects=True)\n",
"with open(\"images/chicago_skyline_shrunk_v2.bmp\", 'wb') as fd:\n",
" fd.write(r.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Visualize the output of your function\n",
"from IPython.display import display as IPydisplay\n",
"\n",
"with open(\"images/chicago_skyline_shrunk_v2.bmp\", 'rb') as skyline_image_file:\n",
" img_skyline_orig = Image.open(skyline_image_file)\n",
" img_skyline_mat = np.asarray(img_skyline_orig)\n",
" kernel_ver = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])\n",
" kernel_hor = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]).T\n",
" img_processed_mat_ver = convolution2d(img_skyline_mat, kernel_ver)\n",
" img_processed_mat_hor = convolution2d(img_skyline_mat, kernel_hor)\n",
" img_processed_mat = np.sqrt(np.multiply(img_processed_mat_ver,\n",
" img_processed_mat_ver) + \\\n",
" np.multiply(img_processed_mat_hor,\n",
" img_processed_mat_hor))\n",
"\n",
" img_processed_mat *= 255.0/img_processed_mat.max()\n",
" img_processed_mat = img_processed_mat.astype(np.uint8)\n",
" img_processed = Image.fromarray(img_processed_mat, 'L')\n",
" width, height = img_skyline_orig.size\n",
" scale = 0.6\n",
" IPydisplay(img_skyline_orig.resize((int(width*scale), int(height*scale))),\n",
" Image.NEAREST)\n",
" IPydisplay(img_processed.resize((int(width*scale), int(height*scale))),\n",
" Image.NEAREST)"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Pretty cool, right? We will go into more detail on what's happening in the next section."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Section 2.1: Demonstration of a CNN in PyTorch\n",
"At this point, you should have a fair idea of how to perform a convolution on an image given a kernel. In the following cell, we provide a code snippet that demonstrates setting up a convolutional network using PyTorch.\n",
"\n",
"We look at the `nn` module in PyTorch. The `nn` module contains a plethora of functions that will make implementing a neural network easier. In particular we will look at the `nn.Conv2d()` function, which creates a convolutional layer that is applied to whatever image that you feed the resulting network.\n",
"\n",
"Look at the code below. In it, we define a `Net` class that you can instantiate with a kernel to create a Neural Network object. When you apply the network object to an image (or anything in the form of a matrix), it convolves the kernel over that image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"class Net(nn.Module):\n",
" \"\"\"\n",
" A convolutional neural network class.\n",
" When an instance of it is constructed with a kernel, you can apply that instance\n",
" to a matrix and it will convolve the kernel over that image.\n",
" i.e. Net(kernel)(image)\n",
" \"\"\"\n",
"\n",
" def __init__(self, kernel=None, padding=0):\n",
" super(Net, self).__init__()\n",
" \"\"\"\n",
" Summary of the nn.conv2d parameters (you can also get this by hovering\n",
" over the method):\n",
" - in_channels (int): Number of channels in the input image\n",
" - out_channels (int): Number of channels produced by the convolution\n",
" - kernel_size (int or tuple): Size of the convolving kernel\n",
"\n",
" Args:\n",
" padding: int or tuple, optional\n",
" Zero-padding added to both sides of the input. Default: 0\n",
" kernel: np.ndarray\n",
" Convolving kernel. Default: None\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" self.conv1 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=2,\n",
" padding=padding)\n",
"\n",
" # If a kernel is provided, use it as the layer's weights (and zero the bias)\n",
" if kernel is not None:\n",
" dim1, dim2 = kernel.shape[0], kernel.shape[1]\n",
" kernel = kernel.reshape(1, 1, dim1, dim2)\n",
"\n",
" self.conv1.weight = torch.nn.Parameter(kernel)\n",
" self.conv1.bias = torch.nn.Parameter(torch.zeros_like(self.conv1.bias))\n",
"\n",
" def forward(self, x):\n",
" \"\"\"\n",
" Forward Pass of nn.conv2d\n",
"\n",
" Args:\n",
" x: torch.tensor\n",
" Input features\n",
"\n",
" Returns:\n",
" x: torch.tensor\n",
" Convolution output\n",
" \"\"\"\n",
" x = self.conv1(x)\n",
" return x"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Format a default 2x2 kernel of numbers from 0 through 3\n",
"kernel = torch.Tensor(np.arange(4).reshape(2, 2))\n",
"\n",
"# Prepare the network with that default kernel\n",
"net = Net(kernel=kernel, padding=0).to(DEVICE)\n",
"\n",
"# Set up a 3x3 image matrix of numbers from 0 through 8\n",
"image = torch.Tensor(np.arange(9).reshape(3, 3))\n",
"image = image.reshape(1, 1, 3, 3).to(DEVICE) # BatchSize X Channels X Height X Width\n",
"\n",
"print(\"Image:\\n\" + str(image))\n",
"print(\"Kernel:\\n\" + str(kernel))\n",
"output = net(image) # Apply the convolution\n",
"print(\"Output:\\n\" + str(output))"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As a quick aside, notice the difference in the input and output size. The input had a size of 3×3, and the output is of size 2×2. This is because of the fact that the kernel can't produce values for the edges of the image - when it slides to an end of the image and is centered on a border pixel, it overlaps space outside of the image that is undefined. If we don't want to lose that information, we will have to pad the image with some defaults (such as 0s) on the border. This process is, somewhat predictably, called *padding*. We will talk more about padding in the next section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"print(\"Image (before padding):\\n\" + str(image))\n",
"print(\"Kernel:\\n\" + str(kernel))\n",
"\n",
"# Prepare the network with the aforementioned default kernel, but this\n",
"# time with padding\n",
"net = Net(kernel=kernel, padding=1).to(DEVICE)\n",
"output = net(image) # Apply the convolution onto the padded image\n",
"print(\"Output:\\n\" + str(output))"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Section 2.2: Padding and Edge Detection"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Before we start in on the exercises, here's a visualization to help you think about padding."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Interactive Demo 2.2: Visualization of Convolution with Padding and Stride\n",
"\n",
"\n",
"Recall that\n",
"* Padding adds rows and columns of zeros to the outside edge of an image\n",
"* Stride length adjusts the distance by which a filter is shifted after each convolution.\n",
"\n",
"Change the padding and stride and see how this affects the shape of the output. How does the padding need to be configured to maintain the shape of the input?"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"**Important:** Change the bool variable `run_demo` to `True` by ticking the box, in order to experiment with the demo. Due to video rendering on jupyter-book, we had to remove it from the automatic execution."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" *Run this cell to enable the widget!*\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @markdown *Run this cell to enable the widget!*\n",
"\n",
"from IPython.display import HTML\n",
"\n",
"id_html = 2.2\n",
"url = f'https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D2_ConvnetsAndDlThinking/static/interactive_demo{id_html}.html'\n",
"run_demo = False # @param {type:\"boolean\"}\n",
"if run_demo:\n",
" display(HTML(url))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Visualization_of_Convolution_with_Padding_and_Stride_Interactive_Demo\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Think! 2.2.1: Edge Detection"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"One of the simpler tasks performed by a convolutional layer is edge detection; that is, finding a place in the image where there is a large and abrupt change in color. Edge-detecting filters are usually learned by the first layers in a CNN. Observe the following simple kernel and discuss whether this will detect vertical edges (where the trace of the edge is vertical; i.e. there is a boundary between left and right), or whether it will detect horizontal edges (where the trace of the edge is horizontal; i.e., there is a boundary between top and bottom).\n",
"\n",
"\\begin{equation}\n",
"\\textbf{Kernel} =\n",
"\\begin{bmatrix} 1 & -1 \\\\ 1 & -1\n",
"\\end{bmatrix}\n",
"\\end{equation}"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main/tutorials/W2D2_ConvnetsAndDlThinking/solutions/W2D2_Tutorial1_Solution_309474b2.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Edge_Detection_Discussion\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Consider the image below, which has a black vertical stripe with white on the side. This is like a very zoomed-in vertical edge within an image!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Prepare an image that's basically just a vertical black stripe\n",
"X = np.ones((6, 8))\n",
"X[:, 2:6] = 0\n",
"print(X)\n",
"plt.imshow(X, cmap=plt.get_cmap('gray'))\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Format the image that's basically just a vertical stripe\n",
"image = torch.from_numpy(X)\n",
"image = image.reshape(1, 1, 6, 8) # BatchSize X Channels X Height X Width\n",
"\n",
"# Prepare a 2x2 kernel with 1s in the first column and -1s in the\n",
"# This exact kernel was discussed above!\n",
"kernel = torch.Tensor([[1.0, -1.0], [1.0, -1.0]])\n",
"net = Net(kernel=kernel)\n",
"\n",
"# Apply the kernel to the image and prepare for display\n",
"processed_image = net(image.float())\n",
"processed_image = processed_image.reshape(5, 7).detach().numpy()\n",
"print(processed_image)\n",
"plt.imshow(processed_image, cmap=plt.get_cmap('gray'))\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As you can see, this kernel detects vertical edges (the black stripe corresponds to a highly positive result, while the white stripe corresponds to a highly negative result. However, to display the image, all the pixels are normalized between 0=black and 1=white)."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Think! 2.2.2 Kernel structure\n",
"\n",
"If the kernel were transposed (i.e., the columns become rows and the rows become columns), what would the kernel detect? What would be produced by running this kernel on the vertical edge image above?"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main/tutorials/W2D2_ConvnetsAndDlThinking/solutions/W2D2_Tutorial1_Solution_7cc3340b.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Kernel_structure_Discussion\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"---\n",
"# Section 3: Kernels, Pooling and Subsampling\n",
"\n",
"*Time estimate: ~50mins*"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"To visualize the various components of a CNN, we will build a simple CNN step by step. Recall that the MNIST dataset consists of binarized images of handwritten digits. This time, we will use the EMNIST letters dataset, which consists of binarized images of handwritten characters $(A, ..., Z)$.\n",
"\n",
"We will simplify the problem further by only keeping the images that correspond to $X$ (labeled as `24` in the dataset) and $O$ (labeled as `15` in the dataset). Then, we will train a CNN to classify an image either an $X$ or an $O$."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download EMNIST dataset\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Download EMNIST dataset\n",
"\n",
"# webpage: https://www.itl.nist.gov/iaui/vip/cs_links/EMNIST/gzip.zip\n",
"fname = 'EMNIST.zip'\n",
"folder = 'EMNIST'\n",
"url = \"https://osf.io/xwfaj/download\"\n",
"download_data(fname, folder, url, tar=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dataset/DataLoader Functions *(Run me!)*\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Dataset/DataLoader Functions *(Run me!)*\n",
"\n",
"def get_Xvs0_dataset(normalize=False, download=False):\n",
" \"\"\"\n",
" Load Dataset\n",
"\n",
" Args:\n",
" normalize: boolean\n",
" If true, normalise dataloader\n",
" download: boolean\n",
" If true, download dataset\n",
"\n",
" Returns:\n",
" emnist_train: torch.loader\n",
" Training Data\n",
" emnist_test: torch.loader\n",
" Test Data\n",
" \"\"\"\n",
" if normalize:\n",
" transform = transforms.Compose([\n",
" transforms.ToTensor(),\n",
" transforms.Normalize((0.1307,), (0.3081,))\n",
" ])\n",
" else:\n",
" transform = transforms.Compose([\n",
" transforms.ToTensor(),\n",
" ])\n",
"\n",
" emnist_train = datasets.EMNIST(root='.',\n",
" split='letters',\n",
" download=download,\n",
" train=True,\n",
" transform=transform)\n",
" emnist_test = datasets.EMNIST(root='.',\n",
" split='letters',\n",
" download=download,\n",
" train=False,\n",
" transform=transform)\n",
"\n",
" # Only want O (15) and X (24) labels\n",
" train_idx = (emnist_train.targets == 15) | (emnist_train.targets == 24)\n",
" emnist_train.targets = emnist_train.targets[train_idx]\n",
" emnist_train.data = emnist_train.data[train_idx]\n",
"\n",
" # Convert X labels to 1, O labels to 0\n",
" emnist_train.targets = (emnist_train.targets == 24).type(torch.int64)\n",
"\n",
" test_idx = (emnist_test.targets == 15) | (emnist_test.targets == 24)\n",
" emnist_test.targets = emnist_test.targets[test_idx]\n",
" emnist_test.data = emnist_test.data[test_idx]\n",
"\n",
" # Convert X labels to 1, O labels to 0\n",
" emnist_test.targets = (emnist_test.targets == 24).type(torch.int64)\n",
"\n",
" return emnist_train, emnist_test\n",
"\n",
"\n",
"def get_data_loaders(train_dataset, test_dataset,\n",
" batch_size=32, seed=0):\n",
" \"\"\"\n",
" Helper function to fetch dataloaders\n",
"\n",
" Args:\n",
" train_dataset: torch.tensor\n",
" Training data\n",
" test_dataset: torch.tensor\n",
" Test data\n",
" batch_size: int\n",
" Batch Size\n",
" seed: int\n",
" Set seed for reproducibility\n",
"\n",
" Returns:\n",
" emnist_train: torch.loader\n",
" Training Data\n",
" emnist_test: torch.loader\n",
" Test Data\n",
" \"\"\"\n",
" g_seed = torch.Generator()\n",
" g_seed.manual_seed(seed)\n",
"\n",
" train_loader = DataLoader(train_dataset,\n",
" batch_size=batch_size,\n",
" shuffle=True,\n",
" num_workers=2,\n",
" worker_init_fn=seed_worker,\n",
" generator=g_seed)\n",
" test_loader = DataLoader(test_dataset,\n",
" batch_size=batch_size,\n",
" shuffle=True,\n",
" num_workers=2,\n",
" worker_init_fn=seed_worker,\n",
" generator=g_seed)\n",
"\n",
" return train_loader, test_loader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"emnist_train, emnist_test = get_Xvs0_dataset(normalize=False, download=False)\n",
"train_loader, test_loader = get_data_loaders(emnist_train, emnist_test,\n",
" seed=SEED)\n",
"\n",
"# Index of an image in the dataset that corresponds to an X and O\n",
"x_img_idx = 4\n",
"o_img_idx = 15"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Let's view a couple samples from the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(12, 6))\n",
"ax1.imshow(emnist_train[0][0].reshape(28, 28), cmap='gray')\n",
"ax2.imshow(emnist_train[10][0].reshape(28, 28), cmap='gray')\n",
"ax3.imshow(emnist_train[4][0].reshape(28, 28), cmap='gray')\n",
"ax4.imshow(emnist_train[6][0].reshape(28, 28), cmap='gray')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Interactive Demo 3: Visualization of Convolution with Multiple Filters\n",
"\n",
"Change the number of input channels (e.g., the color channels of an image or the output channels of a previous layer) and the output channels (number of different filters to apply)."
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"**Important:** Change the bool variable `run_demo` to `True` by ticking the box, in order to experiment with the demo. Due to video rendering on jupyter-book, we had to remove it from the automatic execution."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" *Run this cell to enable the widget!*\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @markdown *Run this cell to enable the widget!*\n",
"\n",
"from IPython.display import HTML\n",
"\n",
"id_html = 3\n",
"url = f'https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W2D2_ConvnetsAndDlThinking/static/interactive_demo{id_html}.html'\n",
"run_demo = False # @param {type:\"boolean\"}\n",
"if run_demo:\n",
" display(HTML(url))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Visualization_of_Convolution_with_Multiple_Filters_Interactive_Demo\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Section 3.1: Multiple Filters\n",
"\n",
"The following network sets up 3 filters and runs them on an image of the dataset from the $X$ class. Note that we are using \"thicker\" filters than those presented in the videos. Here, the filters are $5 \\times 5$, whereas in the videos $3 \\times 3$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"class Net2(nn.Module):\n",
" \"\"\"\n",
" Neural Network instance\n",
" \"\"\"\n",
"\n",
" def __init__(self, padding=0):\n",
" \"\"\"\n",
" Initialize parameters of Net2\n",
"\n",
" Args:\n",
" padding: int or tuple, optional\n",
" Zero-padding added to both sides of the input. Default: 0\n",
"\n",
" Returns:\n",
" Nothing\n",
" \"\"\"\n",
" super(Net2, self).__init__()\n",
" self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=5,\n",
" padding=padding)\n",
"\n",
" # First kernel - leading diagonal\n",
" kernel_1 = torch.Tensor([[[1., 1., -1., -1., -1.],\n",
" [1., 1., 1., -1., -1.],\n",
" [-1., 1., 1., 1., -1.],\n",
" [-1., -1., 1., 1., 1.],\n",
" [-1., -1., -1., 1., 1.]]])\n",
"\n",
" # Second kernel - other diagonal\n",
" kernel_2 = torch.Tensor([[[-1., -1., -1., 1., 1.],\n",
" [-1., -1., 1., 1., 1.],\n",
" [-1., 1., 1., 1., -1.],\n",
" [1., 1., 1., -1., -1.],\n",
" [1., 1., -1., -1., -1.]]])\n",
"\n",
" # Third kernel - checkerboard pattern\n",
" kernel_3 = torch.Tensor([[[1., 1., -1., 1., 1.],\n",
" [1., 1., 1., 1., 1.],\n",
" [-1., 1., 1., 1., -1.],\n",
" [1., 1., 1., 1., 1.],\n",
" [1., 1., -1., 1., 1.]]])\n",
"\n",
"\n",
" # Stack all kernels in one tensor with (3, 1, 5, 5) dimensions\n",
" multiple_kernels = torch.stack([kernel_1, kernel_2, kernel_3], dim=0)\n",
"\n",
" self.conv1.weight = torch.nn.Parameter(multiple_kernels)\n",
"\n",
" # Negative bias\n",
" self.conv1.bias = torch.nn.Parameter(torch.Tensor([-4, -4, -12]))\n",
"\n",
" def forward(self, x):\n",
" \"\"\"\n",
" Forward Pass of Net2\n",
"\n",
" Args:\n",
" x: torch.tensor\n",
" Input features\n",
"\n",
" Returns:\n",
" x: torch.tensor\n",
" Convolution output\n",
" \"\"\"\n",
" x = self.conv1(x)\n",
" return x"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"**Note:** We add a negative bias to give a threshold to select the high output value, which corresponds to the features we want to detect (e.g., 45 degree oriented bar).\n",
"\n",
"Now, let's visualize the filters using the code given below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"net2 = Net2().to(DEVICE)\n",
"fig, (ax11, ax12, ax13) = plt.subplots(1, 3)\n",
"# Show the filters\n",
"ax11.set_title(\"filter 1\")\n",
"ax11.imshow(net2.conv1.weight[0, 0].detach().cpu().numpy(), cmap=\"gray\")\n",
"ax12.set_title(\"filter 2\")\n",
"ax12.imshow(net2.conv1.weight[1, 0].detach().cpu().numpy(), cmap=\"gray\")\n",
"ax13.set_title(\"filter 3\")\n",
"ax13.imshow(net2.conv1.weight[2, 0].detach().cpu().numpy(), cmap=\"gray\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Think! 3.1: Do you see how these filters would help recognize an `X`?"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"execution": {}
},
"source": [
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main/tutorials/W2D2_ConvnetsAndDlThinking/solutions/W2D2_Tutorial1_Solution_800ed014.py)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Multiple_Filters_Discussion\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"We apply the filters to the images."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"net2 = Net2().to(DEVICE)\n",
"x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(DEVICE)\n",
"output_x = net2(x_img)\n",
"output_x = output_x.squeeze(dim=0).detach().cpu().numpy()\n",
"\n",
"o_img = emnist_train[o_img_idx][0].unsqueeze(dim=0).to(DEVICE)\n",
"output_o = net2(o_img)\n",
"output_o = output_o.squeeze(dim=0).detach().cpu().numpy()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Let us view the image of $X$ and $O$ and what the output of the filters applied to them looks like. Pay special attention to the areas with very high vs. very low output patterns."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"fig, ((ax11, ax12, ax13, ax14),\n",
" (ax21, ax22, ax23, ax24),\n",
" (ax31, ax32, ax33, ax34)) = plt.subplots(3, 4)\n",
"\n",
"# Show the filters\n",
"ax11.axis(\"off\")\n",
"ax12.set_title(\"filter 1\")\n",
"ax12.imshow(net2.conv1.weight[0, 0].detach().cpu().numpy(), cmap=\"gray\")\n",
"ax13.set_title(\"filter 2\")\n",
"ax13.imshow(net2.conv1.weight[1, 0].detach().cpu().numpy(), cmap=\"gray\")\n",
"ax14.set_title(\"filter 3\")\n",
"ax14.imshow(net2.conv1.weight[2, 0].detach().cpu().numpy(), cmap=\"gray\")\n",
"\n",
"vmin, vmax = -6, 10\n",
"# Show x and the filters applied to x\n",
"ax21.set_title(\"image x\")\n",
"ax21.imshow(emnist_train[x_img_idx][0].reshape(28, 28), cmap='gray')\n",
"ax22.set_title(\"output filter 1\")\n",
"ax22.imshow(output_x[0], cmap='gray', vmin=vmin, vmax=vmax)\n",
"ax23.set_title(\"output filter 2\")\n",
"ax23.imshow(output_x[1], cmap='gray', vmin=vmin, vmax=vmax)\n",
"ax24.set_title(\"output filter 3\")\n",
"ax24.imshow(output_x[2], cmap='gray', vmin=vmin, vmax=vmax)\n",
"\n",
"# Show o and the filters applied to o\n",
"ax31.set_title(\"image o\")\n",
"ax31.imshow(emnist_train[o_img_idx][0].reshape(28, 28), cmap='gray')\n",
"ax32.set_title(\"output filter 1\")\n",
"ax32.imshow(output_o[0], cmap='gray', vmin=vmin, vmax=vmax)\n",
"ax33.set_title(\"output filter 2\")\n",
"ax33.imshow(output_o[1], cmap='gray', vmin=vmin, vmax=vmax)\n",
"ax34.set_title(\"output filter 3\")\n",
"ax34.imshow(output_o[2], cmap='gray', vmin=vmin, vmax=vmax)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Section 3.2: ReLU after convolutions\n",
"\n",
"Up until now we've talked about the convolution operation, which is linear. But the real strength of neural networks comes from the incorporation of non-linear functions. Furthermore, in the real world, we often have learning problems where the relationship between the input and output is non-linear and complex.\n",
"\n",
"The ReLU (Rectified Linear Unit) introduces non-linearity into our model, allowing us to learn a more complex function that can better predict the class of an image.\n",
"\n",
"The ReLU function is shown below.\n",
"\n",
"
\n",
"\n",
"\n",
"
\n",
"\n",
"[Here](https://stats.stackexchange.com/a/226927) you can found a discussion which talks about how ReLU is useful as an activation funciton.\n",
"\n",
"[Here](https://stats.stackexchange.com/questions/126238/what-are-the-advantages-of-relu-over-sigmoid-function-in-deep-neural-networks?sfb=2) you can found a another excellent discussion about the advantages of using ReLU."
]
},
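{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"To see this concretely (a minimal sketch reusing `net2` and `x_img` from Section 3.1), we can pass the convolution output through `F.relu`: every negative response is clipped to zero, leaving only the locations where a filter matched strongly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Sketch: apply a ReLU to the convolution output from Section 3.1\n",
"with torch.no_grad():\n",
"    conv_out = net2(x_img)       # Linear convolution output (can be negative)\n",
"    relu_out = F.relu(conv_out)  # Non-linearity: negatives clipped to 0\n",
"print(f\"Output range before ReLU: [{conv_out.min():.1f}, {conv_out.max():.1f}]\")\n",
"print(f\"Output range after ReLU: [{relu_out.min():.1f}, {relu_out.max():.1f}]\")"
]
},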
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"## Section 3.3: Pooling\n",
"\n",
"Convolutional layers create feature maps that summarize the presence of particular features (e.g., edges) in the input. However, these feature maps record the _precise_ position of features in the input. That means that small changes to the position of an object in an image can result in a very different feature map. But a cup is a cup (and an $X$ is an $X$) no matter where it appears in the image! We need to achieve _translational invariance_.\n",
"\n",
"A common approach to this problem is called downsampling. Downsampling creates a lower-resolution version of an image, retaining the large structural elements and removing some of the fine detail that may be less relevant to the task. In CNNs, Max-Pooling and Average-Pooling are used to downsample. These operations shrink the size of the hidden layers, and produce features that are more translationally invariant, which can be better leveraged by subsequent layers."
]
},
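{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"As a minimal sketch of these two operations (the $4 \\times 4$ input below is a toy tensor we made up for illustration), PyTorch's `nn.MaxPool2d` and `nn.AvgPool2d` with a $2 \\times 2$ window and stride 2 halve each spatial dimension:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"# Sketch: max- and average-pooling on a toy 4x4 input\n",
"toy = torch.tensor([[[[1., 3., 2., 0.],\n",
"                      [4., 2., 1., 1.],\n",
"                      [0., 1., 5., 2.],\n",
"                      [1., 0., 2., 3.]]]])  # Shape: (N=1, C=1, H=4, W=4)\n",
"\n",
"max_pool = nn.MaxPool2d(kernel_size=2, stride=2)\n",
"avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)\n",
"\n",
"print(\"Max-pooled:\\n\", max_pool(toy).squeeze())  # [[4., 2.], [1., 5.]]\n",
"print(\"Avg-pooled:\\n\", avg_pool(toy).squeeze())  # [[2.5, 1.0], [0.5, 3.0]]"
]
},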
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Video 4: Pooling\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# @title Video 4: Pooling\n",
"from ipywidgets import widgets\n",
"from IPython.display import YouTubeVideo\n",
"from IPython.display import IFrame\n",
"from IPython.display import display\n",
"\n",
"\n",
"class PlayVideo(IFrame):\n",
" def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
" self.id = id\n",
" if source == 'Bilibili':\n",
" src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
" elif source == 'Osf':\n",
" src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
" super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
"\n",
"\n",
"def display_videos(video_ids, W=400, H=300, fs=1):\n",
" tab_contents = []\n",
" for i, video_id in enumerate(video_ids):\n",
" out = widgets.Output()\n",
" with out:\n",
" if video_ids[i][0] == 'Youtube':\n",
" video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
" height=H, fs=fs, rel=0)\n",
" print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
" else:\n",
" video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
" height=H, fs=fs, autoplay=False)\n",
" if video_ids[i][0] == 'Bilibili':\n",
" print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
" elif video_ids[i][0] == 'Osf':\n",
" print(f'Video available at https://osf.io/{video.id}')\n",
" display(video)\n",
" tab_contents.append(out)\n",
" return tab_contents\n",
"\n",
"\n",
"video_ids = [('Youtube', 'XOss-NUlpo0'), ('Bilibili', 'BV1264y1z7JZ')]\n",
"tab_contents = display_videos(video_ids, W=730, H=410)\n",
"tabs = widgets.Tab()\n",
"tabs.children = tab_contents\n",
"for i in range(len(tab_contents)):\n",
" tabs.set_title(i, video_ids[i][0])\n",
"display(tabs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit your feedback\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {},
"tags": [
"hide-input"
]
},
"outputs": [],
"source": [
"# @title Submit your feedback\n",
"content_review(f\"{feedback_prefix}_Pooling_Video\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Like convolutional layers, pooling layers have fixed-shape windows (pooling windows) that are systematically applied to the input. As with filters, we can change the shape of the window and the size of the stride. And, just like with filters, every time we apply a pooling operation we produce a single output.\n",
"\n",
"Pooling performs a kind of information compression that provides summary statistics for a _neighborhood_ of the input.\n",
"- In Maxpooling, we compute the maximum value of all pixels in the pooling window.\n",
"- In Avgpooling, we compute the average value of all pixels in the pooling window.\n",
"\n",
"The example below shows the result of Maxpooling within the yellow pooling windows to create the red pooling output matrix.\n",
"\n",
"\n",
"
\n",
"
\"\"\" + code1 + \"\"\"\n", "
\"\"\" + code2 + \"\"\"\n", "