
Tutorial 2: Natural Language Processing and LLMs#

Week 3, Day 1: Time Series and Natural Language Processing

By Neuromatch Academy

Content creators: Lyle Ungar, Jordan Matelsky, Konrad Kording, Shaonan Wang

Content reviewers: Shaonan Wang, Weizhe Yuan, Dalia Nasr, Stephen Kiilu, Alish Dipani, Dora Zhiyu Yang, Adrita Das

Content editors: Konrad Kording, Shaonan Wang

Production editors: Konrad Kording, Spiros Chavlis


Tutorial Objectives#

This tutorial provides a comprehensive overview of modern natural language processing (NLP). It introduces two influential NLP architectures, BERT and GPT, along with a detailed exploration of the underlying NLP pipeline. Participants will learn about the core concepts, functionalities, and applications of these architectures, as well as gain insights into prompt engineering and the current and future developments of GPT.


Setup#

Install dependencies#

WARNING: There may be errors and/or warnings reported during the installation. However, they are to be ignored.

# @title Install dependencies
# @markdown **WARNING**: There may be *errors* and/or *warnings* reported during the installation. However, they are to be ignored.
!pip3 install gensim==4.3.1 --quiet
!pip3 install pytorch_lightning --quiet
!pip3 install typing_extensions --quiet
!pip install accelerate --quiet
!pip3 install datasets --quiet
!pip3 install transformers==4.28.0 --quiet
!pip3 install evaluate --quiet

Install and import feedback gadget#

# @title Install and import feedback gadget

!pip3 install vibecheck datatops --quiet

from vibecheck import DatatopsContentReviewContainer
def content_review(notebook_section: str):
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_dl",
            "user_key": "f379rz8y",
        },
    ).render()


feedback_prefix = "W3D1_T2"
# Imports
import random
import numpy as np
from typing import Iterable, List
from tqdm.notebook import tqdm
from typing import Dict
import pytorch_lightning as pl

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset
from tokenizers import Tokenizer, Regex, models, normalizers, pre_tokenizers, trainers, processors

Set random seed#

Executing set_seed(seed=seed) sets the seed

# @title Set random seed

# @markdown Executing `set_seed(seed=seed)` sets the seed

# For DL it's critical to set the random seed so that students have a
# baseline to compare their results to the expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html

# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import numpy as np

def set_seed(seed=None):
  if seed is None:
    seed = np.random.choice(2 ** 32)
  random.seed(seed)
  np.random.seed(seed)
  print(f'Random seed {seed} has been set.')


set_seed(seed=2023)  # replace 2023 with any number you like
Random seed 2023 has been set.

Set device (GPU or CPU). Execute set_device()#

# @title Set device (GPU or CPU). Execute `set_device()`

# Inform the user if the notebook uses GPU or CPU.

def set_device():
  """
  Set the device. CUDA if available, CPU otherwise

  Args:
    None

  Returns:
    Nothing
  """
  device = "cuda" if torch.cuda.is_available() else "cpu"
  if device != "cuda":
    print("WARNING: For this notebook to perform best, "
        "if possible, in the menu under `Runtime` -> "
        "`Change runtime type.`  select `GPU` ")
  else:
    print("GPU is enabled in this notebook.")

  return device
DEVICE = set_device()
SEED = 2021
set_seed(seed=SEED)
WARNING: For this notebook to perform best, if possible, in the menu under `Runtime` -> `Change runtime type.`  select `GPU` 
Random seed 2021 has been set.

Section 1: NLP architectures#

From RNN/LSTM to Transformers.

Video 1: Intro to NLPs and LLMs#

A core principle of Natural Language Processing is embedding words as vectors. In the relevant vector space, words with similar meanings are close to one another.

In classical transformer systems, a core principle is encoding and decoding. We can encode an input sequence as a vector (that implicitly codes what we just read). And we can then take this vector and decode it, e.g., as a new sentence. So a sequence-to-sequence (e.g., sentence translation) system may read a sentence (made out of words embedded in a relevant space) and encode it as an overall vector. It then takes the resulting encoding of the sentence and decodes it into a translated sentence.

In modern transformer systems, such as GPT, all words are processed in parallel. In that sense, transformers generalize the encoding/decoding idea. All modern large language models (such as GPT) follow this strategy.
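
As a quick, hedged illustration of the embedding idea (not part of the original notebook), we can load a small set of pretrained GloVe word vectors through gensim's downloader (gensim was installed in Setup) and check that related words are indeed close together in the vector space.

# Minimal sketch (illustration only): load small pretrained word vectors and
# check that semantically related words are close in the vector space.
# Requires internet access to download the GloVe model on first use.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-50")    # small pretrained GloVe vectors

print(glove.most_similar("wolf", topn=3))     # nearby words are related animals
print(glove.similarity("cat", "dog"))         # relatively high similarity
print(glove.similarity("cat", "piano"))       # relatively low similarity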

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Intro_to_NLPs_and_LLMs_Video")

Section 2: The NLP pipeline#

Tokenize, pretrain, fine-tune

Video 2: NLP pipeline#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_NLP_pipeline_Video")

Tokenizers#

Today we will practise embedding techniques and continue our march toward large language models and transformers by discussing one of the critical developments of the modern NLP stack: tokenization. Tokenizers convert raw text into a sequence of discrete tokens.

Learning Goals#

  • Understand the concept of tokenization and why it is useful.

  • Learn how to write a tokenizer from scratch, taking advantage of context.

  • Get an intuition for how modern tokenizers work by playing with a few pre-trained tokenizers from industry.

Generating a dataset#

As we continue to move closer to “production-grade” NLP, we’ll start to use industry standards such as the HuggingFace libraries. Hugging Face is a company that maintains widely used deep learning libraries (such as transformers, datasets, and tokenizers) and a hub for sharing pretrained models and datasets.

We’ll start by generating a training dataset. hf has a convenient datasets module that allows us to download a variety of datasets, including the Wikipedia text corpus. We’ll use this to generate a dataset of text from Wikipedia.

from datasets import load_dataset

dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
print(dataset[41492])
{'text': ' Gray wolves howl to assemble the pack ( usually before and after hunts ) , to pass on an alarm ( particularly at a den site ) , to locate each other during a storm or unfamiliar territory and to communicate across great distances . Wolf howls can under certain conditions be heard over areas of up to 130 km2 ( 50 sq mi ) . Wolf howls are generally indistinguishable from those of large dogs . Male wolves give voice through an octave , passing to a deep bass with a stress on " O " , while females produce a modulated nasal baritone with stress on " U " . Pups almost never howl , while yearling wolves produce howls ending in a series of dog @-@ like yelps . Howling consists of a fundamental frequency that may lie between 150 and 780 Hz , and consists of up to 12 harmonically related overtones . The pitch usually remains constant or varies smoothly , and may change direction as many as four or five times . Howls used for calling pack mates to a kill are long , smooth sounds similar to the beginning of the cry of a horned owl . When pursuing prey , they emit a higher pitched howl , vibrating on two notes . When closing in on their prey , they emit a combination of a short bark and a howl . When howling together , wolves harmonize rather than chorus on the same note , thus creating the illusion of there being more wolves than there actually are . Lone wolves typically avoid howling in areas where other packs are present . Wolves from different geographic locations may howl in different fashions : the howls of European wolves are much more protracted and melodious than those of North American wolves , whose howls are louder and have a stronger emphasis on the first syllable . The two are however mutually intelligible , as North American wolves have been recorded to respond to European @-@ style howls made by biologists . \n'}
def generate_n_examples(dataset, n=512):
  """
  Produce a generator that yields n examples at a time from the dataset.
  """
  for i in range(0, len(dataset), n):
    yield dataset[i:i + n]['text']
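
As a quick sanity check (illustrative only), each item yielded by this generator is a list of up to 512 raw text strings:

# Illustration: peek at the first batch produced by the generator.
first_batch = next(generate_n_examples(dataset))
print(len(first_batch), type(first_batch[0]))  # 512 <class 'str'>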

Now we will create the actual Tokenizer, adhering to the hf.Tokenizer protocol. (Adhering to a standard protocol enables us to swap in our tokenizer for any tokenizer in the huggingface ecosystem or to apply our own tokenizer to any model in the huggingface ecosystem.)

Let’s sketch out the steps of writing a Tokenizer. We need to solve two problems:

  • Given a string, split it into a list of tokens.

  • If you don’t recognize a word, still figure out a way to tokenize it!

This may feel like we’re reinventing our one-hot encoder with a richer vocabulary. Why is it that the One-Hot-Encoder, which outputs a vector of length \(|V|\), where \(|V|\) is the size of our vocabulary, is not sufficient, but a tokenizer that outputs a list of indices into a vocabulary of size \(|V|\) is sufficient? The answer is that while our encoder was responsible for embedding words into a high-dimensional space, our tokenizer is NOT; the “win” of a tokenizer is that it breaks up a string into in-vocab elements. For certain workflows, the very next step might be adding an embedder onto the end of the tokenizer. (As we’ll soon see, this is exactly the strategy employed by modern Transformer models.)
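
To make this distinction concrete, here is a small sketch (with a made-up toy vocabulary, using the PyTorch imports from Setup) contrasting a one-hot representation with integer token indices passed through a separate, learnable embedding layer:

# Toy sketch (illustration only): token indices vs. a one-hot vector vs. a
# learned embedding. The vocabulary here is made up purely for illustration.
toy_vocab = ["[UNK]", "the", "cat", "sat", "on", "mat"]
ids = torch.tensor([1, 2, 3, 4, 1, 5])                # "the cat sat on the mat"

one_hot = F.one_hot(ids, num_classes=len(toy_vocab))  # shape (6, 6): fixed and sparse
embedder = nn.Embedding(len(toy_vocab), 8)            # learned, task-specific
dense = embedder(ids)                                 # shape (6, 8)

print(one_hot.shape, dense.shape)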

Tokens will almost always be different from words; for example, we might want to split “don’t” into “do” and “n’t”, or we might want to split “don’t” into “do” and “not”. Or we might even want to split “don’t” into “d”, “o”, “n”, and “t”. We can choose any strategy we want here: unlike Word2Vec, our tokenizer will NOT be limited to outputting one token per English word. Here, we’ll use an off-the-shelf subword splitter, which we discuss below.

# Try playing with these hyperparameters!
VOCAB_SIZE = 12_000
# Create a tokenizer object that uses the "WordPiece" model. The WordPiece model
# is a subword tokenizer that uses a vocabulary of common words and word pieces
# to tokenize text. The "unk_token" parameter specifies the token to use for
# unknown tokens, i.e. tokens that are not in the vocabulary. (Remember that the
# vocabulary will be built from our dataset, so it will include subchunks of
# English words.)
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))

Tokenizer Features#

Now let’s start dressing up our tokenizer with some useful features. First, let’s clean up the text. This process is formally called “normalization” and is a critical step in any NLP pipeline. We’ll standardize whitespace and convert all the text to lowercase. We’ll also remove diacritics (accents) from the text.

# Think of a Normalizer Sequence the same way you would think of a PyTorch
# Sequential model. It is a sequence of normalizers that are applied to the
# text before tokenization, in the order that they are added to the sequence.

tokenizer.normalizer = normalizers.Sequence([
    normalizers.Replace(Regex(r"[\s]"), " "), # Replace each whitespace character with a plain space
    normalizers.Lowercase(), # Convert all text to lowercase
    normalizers.NFD(), # Decompose all characters into their base characters
    normalizers.StripAccents(), # Remove all accents
])
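
We can check the normalizer in isolation (an illustrative aside) by calling normalize_str on a raw string:

# Illustration: apply just the normalizer to a raw string.
print(tokenizer.normalizer.normalize_str("Héllo\tWORLD, Café!"))
# -> "hello world, cafe!" (whitespace replaced, lowercased, accents stripped)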

Next, we’ll add a pre-tokenizer. The pre-tokenizer is applied to the text after normalizing it but before it’s tokenized. The pre-tokenizer is useful for splitting text into chunks, which are easier to tokenize. For example, we can split text into chunks separated by punctuation or whitespace.

tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.WhitespaceSplit(), # Split on whitespace
    pre_tokenizers.Digits(individual_digits=True), # Split digits into individual tokens
    pre_tokenizers.Punctuation(), # Split punctuation into individual tokens
])
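
Similarly, we can peek at the pre-tokenizer on its own (illustration) with pre_tokenize_str; digits and punctuation become their own chunks:

# Illustration: see how the pre-tokenizer chunks text before any vocabulary lookup.
print(tokenizer.pre_tokenizer.pre_tokenize_str("Page 42, section 3!"))
# -> a list of (chunk, (start, end)) pairs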

Finally, we’ll train the tokenizer with our dataset. After all, we want a tokenizer that works well on this dataset. There are a few different algorithms for training tokenizers. Here are two common ones:

  • BPE Algorithm: Start with a vocabulary of each character in the dataset. Examine all adjacent pairs of vocabulary symbols in the dataset and merge the pair with the highest frequency into a new vocabulary entry. Repeat until the vocabulary size is reached (so “ee” is more likely to get merged than “zf” in an English corpus; see the toy sketch after this list).

  • Top-Down WordPiece Algorithm: Generate all substrings of each word from the dataset and count occurrences in the training data. Keep any string that occurs more than a threshold number of times. Repeat this process until the vocabulary size is reached (For a more thorough explanation of this process, see the TensorFlow Guide)
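
To build intuition for the BPE counting step described above, here is a toy sketch (illustration only, not the real training code) that counts adjacent symbol pairs in a tiny made-up corpus and picks the most frequent pair to merge:

# Toy sketch (illustration only) of one BPE merge step: count adjacent symbol
# pairs over a tiny corpus and merge the most frequent pair.
from collections import Counter

toy_corpus = ["tree", "see", "bee", "free"]
words = [list(w) for w in toy_corpus]        # start from individual characters

pair_counts = Counter()
for w in words:
  for a, b in zip(w, w[1:]):
    pair_counts[(a, b)] += 1

print(pair_counts.most_common(1))            # ('e', 'e') wins, so "ee" is merged first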

We’ll use WordPiece in the next cell.

tokenizer_trainer = trainers.WordPieceTrainer(
    vocab_size=VOCAB_SIZE,
    # We have to specify the special tokens that we want to use. These will be
    # added to the vocabulary no matter what the vocab-building algorithm does.
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
    show_progress=True,
)

Those special tokens are important because they tell the WordPiece training process how to treat phrases, masks, and unknown tokens.

Note: We can also add our own special tokens, such as [CITE], to indicate when a citation is about to be used, if we want to train a model to predict the presence of citations in a text. Training the tokenizer will take a bit of time.

sample_ratio = 0.2
keep = int(len(dataset)*sample_ratio)
dataset_small = load_dataset("wikitext", "wikitext-103-raw-v1", split=f"train[:{keep}]")
tokenizer.train_from_iterator(generate_n_examples(dataset_small), trainer=tokenizer_trainer, length=len(dataset_small))



# In "real life", we'd probably want to save the tokenizer to disk so that we
# can use it later. We can do this with the "save" method:
# tokenizer.save("tokenizer.json")

# Let's try it out!
print("Hello, world!")
print(
    *zip(
        tokenizer.encode("Hello, world!").tokens,
        tokenizer.encode("Hello, world!").ids,
    )
)


# Can we also tokenize made-up words?
print(tokenizer.encode("These toastersocks are so groommpy!").tokens)
Hello, world!
('hell', 9140) ('##o', 2277) (',', 16) ('world', 4375) ('!', 5)
['these', 'to', '##aster', '##so', '##ck', '##s', 'are', 'so', 'gro', '##omm', '##p', '##y', '!']

(The ## means that the token is a continuation of the previous chunk.)
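
As a round-trip check (a minimal sketch), we can attach a WordPiece decoder so that decode() stitches the ## continuation pieces back together:

# Illustration: without a decoder, decode() simply joins tokens with spaces;
# the WordPiece decoder merges "##" continuation pieces back into words.
from tokenizers import decoders

tokenizer.decoder = decoders.WordPiece()
encoding = tokenizer.encode("These toastersocks are so groommpy!")
print(tokenizer.decode(encoding.ids))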

Try playing around with the hyperparameters and the tokenizing algorithms to see how they affect the tokenizer’s output. There can be some very major differences!

In summary, we created a tokenizer pipeline that:

  • Normalizes the text (cleans up punctuation and diacritics)

  • Splits the text into chunks (using whitespace and punctuation)

  • Trains the tokenizer on the dataset (using the WordPiece algorithm)

In common use, this would be the first step of any modern NLP pipeline. The next step would be to add an embedder to the end of the tokenizer, so that we can feed high-dimensional vectors into our model. But unlike Word2Vec, we can now separate the tokenization step from the embedding step, which means our encoding/embedding process can be task-specific and custom to our downstream neural net architecture, instead of general-purpose.

Think 2.1! Is it a good idea to do pre_tokenizers?#

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Is_it_a_good_idea_to_do_pre_tokenizers_Discussion")

Think 2.2! Tokenizer good practices#

We established that the tokenizer is a better move than the One-Hot-Encoder because it can handle out-of-vocabulary words. But what if we just made a one-hot encoding where the vocabulary is all possible two-character combinations? Would there still be an advantage to the tokenizer?

Hint: Re-read the section on the BPE and WordPiece algorithms, and how the tokens are selected.

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Tokenizer_good_practices_Discussion")

Think 2.3: Chinese and English tokenizer#

Let’s think about a language like Chinese, where each word is composed of relatively few characters compared to English (“hungry” is six Unicode characters, but 饿 is one), but there are many more unique Chinese characters than there are letters in the English alphabet.

In a one or two sentence high-level sketch, what properties would be desirable for a Chinese tokenizer to have?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Chinese_and_English_tokenizer_Discussion")

Section 3: Using BERT#

In this section, we will learn about using the BERT model from huggingface.

Learning Goals#

  • Understand the idea behind BERT

  • Understand the idea of pre-training and fine-tuning

  • Understand how freezing parts of the network is useful (see the sketch after this list)
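
As a minimal, hedged sketch of the freeze-and-fine-tune idea (not part of the original notebook; it assumes internet access to download bert-base-uncased), we can load a pretrained BERT encoder from transformers, freeze its weights, and attach a small trainable head:

# Minimal sketch (illustration only): freeze a pretrained BERT encoder and
# train only a small classification head on top of it.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

for param in bert.parameters():      # freeze the pretrained weights
  param.requires_grad = False

head = nn.Linear(bert.config.hidden_size, 2)     # e.g., a 2-class classifier

batch = bert_tokenizer(["I loved this movie!"], return_tensors="pt")
cls_vec = bert(**batch).last_hidden_state[:, 0]  # the [CLS] representation
print(head(cls_vec).shape)                       # torch.Size([1, 2])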

Video 3: BERT#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_BERT_Video")

Section 4: NLG with GPT#

In this section we will learn about Natural Language Generation with Generative Pretrained Transformers.

Learning goals#

  • How to produce language with GPTs

Video 4: NLG#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_NLG_Video")

Using state-of-the-art (SOTA) Models#

Unless you are writing your own experimental DL research (and sometimes even then!), it is far more common these days to use the HuggingFace model library to import and start working with state-of-the-art models quickly. In this section, we will show you how to do that.

We will download a pretrained model from the hf transformers library that is used to generate text. We will then fine-tune it on a different dataset, using the hf.datasets library and the HuggingFace Trainer classes to make the process as easy as possible, and we’ll see that we can accomplish all of this in just a few lines of easily maintained code.

Ultimately, we will have a working generator… for code!

We’re first going to pick a tokenizer. You can see some of the options here. We’ll use the CodeParrot tokenizer, which is a BPE tokenizer. But you can choose (or build!) another if you’d like to try offroading!

from transformers import AutoTokenizer
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
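
As a quick, illustrative check, we can look at how this tokenizer splits a small piece of Python code into subword tokens:

# Illustration: tokenize a small Python snippet with the CodeParrot tokenizer.
snippet = "def add(a, b):\n    return a + b"
print(tokenizer.tokenize(snippet))
print(len(tokenizer(snippet)["input_ids"]), "tokens")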

Think 4.1! Tokenizers#

Why can you use a different tokenizer than the one that was originally used? What requirements must another tokenizer for this task have?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Tokenizers_Discussion")

Next, we’ll download a pre-built model architecture. CodeParrot (the model) is a GPT-2 model, which is a transformer-based language model. You can see some of the options here. But you can choose (or build!) another!

Note that codeparrot/codeparrot (https://huggingface.co/codeparrot/codeparrot) is about 7GB to download (so it may take a while, or it may be too large for your runtime if you’re on a free Colab). Instead, we will use a smaller model, codeparrot/codeparrot-small (https://huggingface.co/codeparrot/codeparrot-small), which is only ~500MB.

To run everything together (tokenization, model, and de-tokenization), we can use the pipeline function from transformers.

from transformers import AutoModelWithLMHead
from transformers import pipeline

model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small")
generation_pipeline = pipeline(
    "text-generation", # The task to run. This tells hf what the pipeline steps are
    model=model, # The model to use; can also pass the string here;
    tokenizer=tokenizer, # The tokenizer to use; can also pass the string name here.
)
input_prompt = '''\
def simple_add(a: int, b: int) -> int:
    """
    Adds two numbers together and returns the result.
    """'''

# Return tensors for PyTorch:
inputs = tokenizer(input_prompt, return_tensors="pt")

Recall that these tokens are integer indices in the vocabulary of the tokenizer. We can use the tokenizer to decode these tokens into a string, which we can print out to see what the model generates.

input_token_ids = inputs["input_ids"]
input_strs = tokenizer.convert_ids_to_tokens(*input_token_ids.tolist())

print(*zip(input_strs, input_token_ids[0]))
('def', tensor(318)) ('Ġsimple', tensor(3486)) ('_', tensor(63)) ('add', tensor(525)) ('(', tensor(8)) ('a', tensor(65)) (':', tensor(26)) ('Ġint', tensor(1109)) (',', tensor(12)) ('Ġb', tensor(330)) (':', tensor(26)) ('Ġint', tensor(1109)) (')', tensor(9)) ('Ġ->', tensor(1035)) ('Ġint', tensor(1109)) (':', tensor(26)) ('ĊĠĠĠ', tensor(272)) ('Ġ"""', tensor(408)) ('ĊĠĠĠ', tensor(272)) ('ĠAdds', tensor(15747)) ('Ġtwo', tensor(2877)) ('Ġnumbers', tensor(5579)) ('Ġtogether', tensor(10451)) ('Ġand', tensor(436)) ('Ġreturns', tensor(2529)) ('Ġthe', tensor(314)) ('Ġresult', tensor(754)) ('.', tensor(14)) ('ĊĠĠĠ', tensor(272)) ('Ġ"""', tensor(408))

(Quick knowledge-check: what are the weirdly-rendering characters representing?)
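
Going the other direction (an illustrative aside), tokenizer.decode() maps the ids straight back to the prompt string:

# Illustration: decode the token ids back into the original prompt.
print(tokenizer.decode(input_token_ids[0]))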

This model is already ready to use! Let’s give it a try. (Note that we don’t use inputs — we just generated that to show the initial tokenization steps.)

Here, we use the pipeline we created earlier to combine all our components. If you were writing a Copilot-style code-completer, you could get away with wrapping this single line in a nice API and calling it a day!

Play with the hyperparameters and see what kinds of outputs you can get. Temperature controls how much randomness is used when sampling from the model’s predicted token distribution. Higher temperature means more randomness and lower temperature means less randomness. More randomness leads to wilder predictions and potentially more creative answers. A good place to start is 0.2. You can also try changing the max_length parameter, which caps how long the generated code can be (though the model can emit a “stop” token before reaching the cap, so it may not always generate exactly this many tokens).
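
As a hedged sketch of the temperature effect, the loop below explicitly enables sampling with do_sample=True (temperature only matters when sampling) and compares a low and a high setting side by side:

# Sketch (illustration): compare generations at a low vs. a high temperature.
# do_sample=True enables sampling so that the temperature setting takes effect.
for temp in (0.2, 1.2):
  sampled = generation_pipeline(
      input_prompt,
      max_length=60,
      do_sample=True,
      temperature=temp,
      num_return_sequences=1,
  )[0]["generated_text"]
  print(f"--- temperature={temp} ---\n{sampled}\n")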

outputs = generation_pipeline(input_prompt, max_length=100, num_return_sequences=1, temperature=0.2)
print(outputs[0]["generated_text"])
def simple_add(a: int, b: int) -> int:
    """
    Adds two numbers together and returns the result.
    """
    return a + b


def simple_sub(a: int, b: int) -> int:
    """
    Subtracts two numbers together and returns the result.
    """
    return a - b


def simple_mul(a: int, b: int) -> int:
    """
    Multiplies two numbers

Let’s see if we can fool our model now! The huggingface documentation tells us that the codeparrot model was trained to generate Python code (docs). Let’s see if we can get it to generate some JavaScript.

input_prompt = "class SimpleAdder {"

print(generation_pipeline(input_prompt, max_length=100, num_return_sequences=1, temperature=0.2)[0]["generated_text"])
class SimpleAdder {
    public:
        class SimpleAdder(Adder):
            def __init__(self, *args, **kwargs):
                super().__init__(*args, **kwargs)
                self.name = 'SimpleAdder'
                self.adder = self.adder_class()
                self.adder.name = 'SimpleAdder'
                self.adder.adder_class = self.adder_class()
                self.adder.

Yikes! I don’t know what it generated for you, but what it made for me was:

class SimpleAdder {
    public:
        class SimpleAdder(object):
            def __init__(self, a, b):
                self.a = a
                self.b = b

            def __call__(self, x):
                return self.a + x

Ew! That’s wrong in a lot of ways. But it’s understandable: Our model can’t really generalize outside of the domain in which it was trained. And so probably there were a few Python files that included syntax of other languages (perhaps generators for other code?). So the model knows that there’s some mysterious syntax that uses curly brackets… But it’s not sure about anything else. (For the programming-language hobbyists among you: The public notation looks to me a lot like the model is trying to do something C-flavored and perhaps something Java-flavored; I like it! But it’s definitely not JavaScript.)

What are the major observations?

  • The syntax it generates rapidly devolves into Python; it can predict only a few characters of non-Python before falling back into its familiar training territory.

  • The part of the code that follows Python syntax is valid and resembles a useful class definition (although if you look closely, it doesn’t seem to do anything useful with the b attribute…). This tells us that the model “understands” its problem domain but hasn’t been trained on the correct data to solve our new problem.

Think 4.2! Using SOTA models#

You’re now aware of how Transformers work. What other observations can you make about the code it generated for you?

  1. Think specifically: what observations would a machine learning practitioner make here if their role were to diagnose the error in a production system?

  2. Now, how would a nonexpert user interpret the issues?

  3. Do you think the model-reported confidence for this output would be high, low, or in between…?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Using_SOTA_models_Discussion")

Fine-Tuning#

Alright, so we have a model that can generate code. But now, we want to fine-tune it to generate JavaScript.

Since the full dataset would be too large to fit on disk in Colab, we’ll use the load_dataset function in streaming mode so that we only pull the examples we need. There’s a JavaScript subset of the codeparrot dataset, which we’ll use as an example… But you can use any dataset you like! We recommend filtering datasets by task category (e.g., text generation) to get the most relevant datasets. Still, you can use any dataset you like if you can configure the data loader to use it. (Consider, for example, this one.)

Choose a dataset from the HuggingFace datasets library.

# Unlike _some_ code-generator models on the market, we'll limit our training data by license :)
dataset = load_dataset(
    "codeparrot/github-code",
    streaming=True,
    split="train",
    languages=["JavaScript"],
    licenses=["mit", "isc", "apache-2.0"],
)
# Print the schema of the first example from the training set:
print({k: type(v) for k, v in next(iter(dataset)).items()})
{'code': <class 'str'>, 'repo_name': <class 'str'>, 'path': <class 'str'>, 'language': <class 'str'>, 'license': <class 'str'>, 'size': <class 'int'>}
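
Because the dataset is streamed, we can only iterate over it; here is an illustrative peek at the first example's fields:

# Illustration: peek at one streamed example without downloading everything.
from itertools import islice

for example in islice(dataset, 1):
  print(example["repo_name"], example["license"])
  print(example["code"][:200])  # first 200 characters of the file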

Like training any model, we need to define a training loop and an evaluation metric.

This is made overwhelmingly easy with the transformers library. Specifically, look below at all of the code you can avoid writing by using the huggingface infrastructure. (In the past, we’ve used PyTorch Lightning, which has a similar training-loop abstraction. Do you have preferences between these two libraries?)

Coding Exercise 4.1: Implement the code to fine-tune the model#

Here are the big pieces of what we do below:

  • Create a TrainingArguments object. This serializable object (i.e., you can save it to memory or disk) makes it easy to train a model reproducibly with the same hyperparameters (this certainly beats having a bunch of global variables in your notebook!).

  • Encode the dataset. This is effectively just passing everything through the tokenizer, with a padding step that fills the end of each sequence with the padding token.

  • Define our metrics. We use the accuracy metric here (see the metric = load("accuracy") line in the code cell below).

  • Create a data collator. This function takes a list of examples and returns a batch of examples. The DataCollatorForLanguageModeling class is a convenient way to do this.

  • Create a Trainer object. This class wraps the training loop and makes it easy to train a model. It’s a bit like the Trainer class in PyTorch Lightning, but it’s a bit more flexible and works with non-PyTorch models as well.

from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling
from evaluate import load
metric = load("accuracy")

# Trainer:
training_args = TrainingArguments(
    output_dir="./codeparrot",
    max_steps=100,
    per_device_train_batch_size=1,
)

tokenizer.pad_token = tokenizer.eos_token

encoded_dataset = dataset.map(
    lambda x: tokenizer(x["code"], truncation=True, padding="max_length"),
    batched=True,
    remove_columns=["code"],
)


# Metrics for loss:
def compute_metrics(eval_pred):
  predictions, labels = eval_pred
  predictions = np.argmax(predictions, axis=-1)
  return metric.compute(predictions=predictions, references=labels)


# Data collator:
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False,
)

trainer = ...

Click for solution

# Run the actual training:
trainer.train()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[43], line 2
      1 # Run the actual training:
----> 2 trainer.train()

AttributeError: 'ellipsis' object has no attribute 'train'

Finally, we will try our model on the same code snippet to see how it performs after fine-tuning:

# Move the model to the CPU for inference
model.to("cpu")
print(
    generation_pipeline(
        input_prompt, max_length=100, num_return_sequences=1, temperature=0.2
    )[0]["generated_text"]
)
class SimpleAdder {
    public:
        def __init__(self, *args, **kwargs):
            super(SimpleAdder, self).__init__(*args, **kwargs)
            self.name = 'SimpleAdder'

    def get_name(self):
        return self.name

    def get_description(self):
        return 'SimpleAdder'

    def get_icon(self):
        return'simple-adder'

    def get_icon_url(self):

Of course, your results will be slightly different. Here’s what I got:

class SimpleAdder {
    constructor(a, b) {
        this.a = a;
        this.b = b;
    }

    add(

Much better! The model is no longer generating Python code, and it’s not trying to jam Python-flavored syntax into other languages. It’s still imperfect, but it’s much better than before! (And, of course, remember that this is just a small model, and we didn’t train it for very long. You can either try training it for longer or using a larger model to get better results.)

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_FineTune_the_model_Exercise")

Think 4.3! Accuracy metric observations#

Why might accuracy be a bad metric for this task?

Hint: What does it mean to be “accurate” in this task?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Accuracy_metric_observations_Discussion")

Section 5: GPT Today and Tomorrow#

Limitation of the current models.

Video 5: Conclusion#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Conclusion_Video")

Play around with LLMs#

Try the following questions with ChatGPT (GPT3.5 without access to the web) and with GPTBing in creative mode (GPT4 with access to the web). Note that the latter requires installing Microsoft Edge.

Pick someone you know who is likely to have a web presence but is not super famous (not Musk or Trump). Ask GPT for a two-paragraph biography. How good is it?

Ask it something like “What is the US, UK, Germany, China, and Japan’s per capita income over the past ten years? Plot the data in a single figure” (depending on when and where you run this, you will need to paste the resulting Python code into a colab notebook). Try asking it questions about the data or the definition of “per capita income” used. How good is it?

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Play_around_with_LLMs_Activity")

Summary#

In this tutorial, you have become familiar with modern natural language processing (NLP) architectures. We learned about the core concepts, functionalities, and applications of these architectures. We also gained insights into prompt engineering and learned about GPT.


Daily survey#

Don’t forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey.

button link to survey


Bonus Section: Using Large Language Models (LLMs)#

This video tells you what large language models are being used for now and how you can use them: for instance, personalized tutoring, language practice, improving writing, exam preparation, writing help, and data science.

Video 6: Using GPT#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_What_models_Video")