
Tutorial 2: Natural Language Processing and LLMs#

Week 3, Day 1: Time Series and Natural Language Processing

By Neuromatch Academy

Content creators: Lyle Ungar, Jordan Matelsky, Konrad Kording, Shaonan Wang, Alish Dipani

Content reviewers: Shaonan Wang, Weizhe Yuan, Dalia Nasr, Stephen Kiilu, Alish Dipani, Dora Zhiyu Yang, Adrita Das

Content editors: Konrad Kording, Shaonan Wang

Production editors: Konrad Kording, Spiros Chavlis, Konstantine Tsafatinos


Tutorial Objectives#

This tutorial provides a comprehensive overview of modern natural language processing (NLP). It introduces two influential NLP architectures, BERT and GPT, along with a detailed exploration of the underlying NLP pipeline. Participants will learn about the core concepts, functionalities, and applications of these architectures, as well as gain insights into prompt engineering and the current and future developments of GPT.


Setup#

Install dependencies#

WARNING: There may be errors and/or warnings reported during the installation. However, they can be safely ignored.

# @title Install dependencies
# @markdown **WARNING**: There may be *errors* and/or *warnings* reported during the installation. However, they are to be ignored.
!pip3 install gensim==4.3.1 --quiet
!pip3 install pytorch_lightning --quiet
!pip3 install typing_extensions --quiet
!pip install accelerate --quiet
!pip3 install datasets --quiet
!pip3 install transformers==4.28.0 --quiet
!pip3 install evaluate --quiet

Install and import feedback gadget#

# @title Install and import feedback gadget

!pip3 install vibecheck datatops --quiet

from vibecheck import DatatopsContentReviewContainer
def content_review(notebook_section: str):
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_dl",
            "user_key": "f379rz8y",
        },
    ).render()


feedback_prefix = "W3D1_T2"
# Imports
import random
import numpy as np
from typing import Iterable, List
from tqdm.notebook import tqdm
from typing import Dict
import pytorch_lightning as pl

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset
from tokenizers import Tokenizer, Regex, models, normalizers, pre_tokenizers, trainers, processors

Set random seed#

Executing set_seed(seed=seed) sets the seed.

# @title Set random seed

# @markdown Executing `set_seed(seed=seed)` you are setting the seed

# For DL it's critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html

# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import numpy as np

def set_seed(seed=None):
  if seed is None:
    seed = np.random.choice(2 ** 32)
  random.seed(seed)
  np.random.seed(seed)
  print(f'Random seed {seed} has been set.')


set_seed(seed=2023)  # change 2023 with any number you like
Random seed 2023 has been set.
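
Note that this helper seeds only Python's random module and NumPy. If you also want PyTorch operations (used heavily below) to be reproducible, a common optional addition is to seed torch as well; set_seed_torch here is a hypothetical helper sketched for illustration, not part of the original pipeline:

# Optional: also seed PyTorch (a hypothetical extension of set_seed)
import torch

def set_seed_torch(seed=2023):
  torch.manual_seed(seed)             # seed the CPU RNG
  if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)  # seed all GPU RNGs

# set_seed_torch(seed=2023)  # uncomment for torch-level reproducibility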

Set device (GPU or CPU). Execute set_device()#

# @title Set device (GPU or CPU). Execute `set_device()`

# Inform the user if the notebook uses GPU or CPU.

def set_device():
  """
  Set the device. CUDA if available, CPU otherwise

  Args:
    None

  Returns:
    Nothing
  """
  device = "cuda" if torch.cuda.is_available() else "cpu"
  if device != "cuda":
    print("WARNING: For this notebook to perform best, "
        "if possible, in the menu under `Runtime` -> "
        "`Change runtime type.`  select `GPU` ")
  else:
    print("GPU is enabled in this notebook.")

  return device
DEVICE = set_device()
SEED = 2021
set_seed(seed=SEED)
WARNING: For this notebook to perform best, if possible, in the menu under `Runtime` -> `Change runtime type.`  select `GPU` 
Random seed 2021 has been set.

Section 1: NLP architectures#

From RNN/LSTM to Transformers.

Video 1: Intro to NLPs and LLMs#

A core principle of Natural Language Processing is embedding words as vectors. In the relevant vector space, words with similar meanings are close to one another.

In classical transformer systems, a core principle is encoding and decoding. We can encode an input sequence as a vector (that implicitly codes what we just read). And we can then take this vector and decode it, e.g., as a new sentence. So a sequence-to-sequence (e.g., sentence translation) system may read a sentence (made out of words embedded in a relevant space) and encode it as an overall vector. It then takes the resulting encoding of the sentence and decodes it into a translated sentence.

In modern transformer systems, such as GPT, all tokens in the sequence are processed in parallel. In that sense, transformers generalize the encoding/decoding idea. Examples of this strategy include all the modern large language models (such as GPT).
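
As a quick illustration of the embedding idea, here is a minimal sketch that loads a small set of pretrained GloVe vectors through gensim.downloader (an assumption for this example only; the download is roughly 66 MB, and the vectors are not used elsewhere in the tutorial):

import gensim.downloader as api

# Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
glove = api.load("glove-wiki-gigaword-50")

# Words with similar meanings sit close together in the vector space.
print(glove.most_similar("wolf", topn=3))
print(glove.similarity("king", "queen"))   # relatively high
print(glove.similarity("king", "carrot"))  # relatively low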

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Intro_to_NLPs_and_LLMs_Video")

Section 2: The NLP pipeline#

Tokenize, pretrain, fine-tune

Video 2: NLP pipeline#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_NLP_pipeline_Video")

Tokenizers#

Today we will practice embedding techniques and continue our march toward large language models and transformers by discussing one of the critical developments of the modern NLP stack: tokenization. Tokenizers convert raw text into a sequence of discrete tokens.

Learning Goals#

  • Understand the concept of tokenization and why it is useful.

  • Learn how to write a tokenizer from scratch, taking advantage of context.

  • Get an intuition for how modern tokenizers work by playing with a few pre-trained tokenizers from industry.

Generating a dataset#

As we continue to move closer to “production-grade” NLP, we’ll start to use industry standards such as the HuggingFace library. Hugging Face is a company that maintains a widely used hub for sharing models, datasets, and other building blocks of modern deep learning systems.

We’ll start by generating a training dataset. hf has a convenient datasets module that allows us to download a variety of datasets, including the Wikipedia text corpus. We’ll use this to generate a dataset of text from Wikipedia.

from datasets import load_dataset

dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
print(dataset[41492])
{'text': ' Gray wolves howl to assemble the pack ( usually before and after hunts ) , to pass on an alarm ( particularly at a den site ) , to locate each other during a storm or unfamiliar territory and to communicate across great distances . Wolf howls can under certain conditions be heard over areas of up to 130 km2 ( 50 sq mi ) . Wolf howls are generally indistinguishable from those of large dogs . Male wolves give voice through an octave , passing to a deep bass with a stress on " O " , while females produce a modulated nasal baritone with stress on " U " . Pups almost never howl , while yearling wolves produce howls ending in a series of dog @-@ like yelps . Howling consists of a fundamental frequency that may lie between 150 and 780 Hz , and consists of up to 12 harmonically related overtones . The pitch usually remains constant or varies smoothly , and may change direction as many as four or five times . Howls used for calling pack mates to a kill are long , smooth sounds similar to the beginning of the cry of a horned owl . When pursuing prey , they emit a higher pitched howl , vibrating on two notes . When closing in on their prey , they emit a combination of a short bark and a howl . When howling together , wolves harmonize rather than chorus on the same note , thus creating the illusion of there being more wolves than there actually are . Lone wolves typically avoid howling in areas where other packs are present . Wolves from different geographic locations may howl in different fashions : the howls of European wolves are much more protracted and melodious than those of North American wolves , whose howls are louder and have a stronger emphasis on the first syllable . The two are however mutually intelligible , as North American wolves have been recorded to respond to European @-@ style howls made by biologists . \n'}
def generate_n_examples(dataset, n=512):
  """
  Produce a generator that yields n examples at a time from the dataset.
  """
  for i in range(0, len(dataset), n):
    yield dataset[i:i + n]['text']
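
As a quick sanity check, we can pull one batch from this generator (using the wikitext dataset loaded above):

batch = next(generate_n_examples(dataset))
print(len(batch))      # 512 examples per batch
print(batch[1][:80])   # first 80 characters of one example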

Now we will create the actual Tokenizer, adhering to the hf.Tokenizer protocol. (Adhering to a standard protocol enables us to swap in our tokenizer for any tokenizer in the huggingface ecosystem or to apply our own tokenizer to any model in the huggingface ecosystem.)

Let’s sketch out the steps of writing a Tokenizer. We need to solve two problems:

  • Given a string, split it into a list of tokens.

  • If you don’t recognize a word, still figure out a way to tokenize it!

This may feel like we’re reinventing our one-hot encoder with a richer vocabulary. Why is it that the One-Hot-Encoder, which outputs a vector of length |V|, where |V| is the size of our vocabulary, is not sufficient, but a tokenizer that outputs a list of indices into a vocabulary of size |V| is sufficient? The answer is that while our encoder was responsible for embedding words into a high-dimensional space, our tokenizer is NOT; the “win” of a tokenizer is that it breaks up a string into in-vocab elements. For certain workflows, the very next step might be adding an embedder onto the end of the tokenizer. (As we’ll soon see, this is exactly the strategy employed by modern Transformer models.)
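
To make the contrast concrete, here is a toy comparison with a hypothetical five-entry vocabulary (illustration only; the vocabulary we build below has 12,000 entries):

toy_vocab = ["[UNK]", "the", "wolf", "how", "##ls"]  # hypothetical tiny vocabulary

# One-hot encoding: every token becomes a vector of length |V|.
one_hot = np.eye(len(toy_vocab))[[1, 2, 3, 4]]  # "the wolf how ##ls"
print(one_hot.shape)  # (4, 5) -- storage grows with |V|

# Tokenizer output: just a list of integer indices into the vocabulary.
token_ids = [1, 2, 3, 4]
print(token_ids)  # the embedding into vectors happens later, downstream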

Tokens will almost always be different from words; for example, we might want to split “don’t” into “do” and “n’t”, or we might want to split “don’t” into “do” and “not”. Or we might even want to split “don’t” into “d”, “o”, “n”, and “t”. We can choose any strategy we want here; unlike Word2Vec, our tokenizer will NOT be limited to outputting one vector per English word. Here, we’ll use an off-the-shelf subword splitter, which we discuss below.

VOCAB_SIZE = 12_000
# Create a tokenizer object that uses the "WordPiece" model. The WordPiece model
# is a subword tokenizer that uses a vocabulary of common words and word pieces
# to tokenize text. The "unk_token" parameter specifies the token to use for
# unknown tokens, i.e. tokens that are not in the vocabulary. (Remember that the
# vocabulary will be built from our dataset, so it will include subchunks of
# English words.)
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))

Tokenizer Features#

Now let’s start dressing up our tokenizer with some useful features. First, let’s clean up the text. This process is formally called “normalization” and is a critical step in any NLP pipeline. We’ll remove punctuation and then convert all the text to lowercase. We’ll also remove diacritics (accents) from the text.

# Think of a Normalizer Sequence the same way you would think of a PyTorch
# Sequential model. It is a sequence of normalizers that are applied to the
# text before tokenization, in the order that they are added to the sequence.

tokenizer.normalizer = normalizers.Sequence([
    normalizers.Replace(Regex(r"[\s]"), " "), # Convert all whitespace to single space
    normalizers.Lowercase(), # Convert all text to lowercase
    normalizers.NFD(), # Decompose all characters into their base characters
    normalizers.StripAccents(), # Remove all accents
])

Next, we’ll add a pre-tokenizer. The pre-tokenizer is applied to the text after normalizing it but before it’s tokenized. The pre-tokenizer is useful for splitting text into chunks, which are easier to tokenize. For example, we can split text into chunks separated by punctuation or whitespace.

tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.WhitespaceSplit(), # Split on whitespace
    pre_tokenizers.Digits(individual_digits=True), # Split digits into individual tokens
    pre_tokenizers.Punctuation(), # Split punctuation into individual tokens
])

Note: In practice, pre-tokenizers like these are not always necessary (or even desirable); we use them here for demonstration purposes. For instance, “2-3” is not the same as “23”, so aggressively removing punctuation or splitting up digits can destroy meaning, and modern subword tokenizers are powerful enough to deal with punctuation on their own.
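
To see what the normalizer and pre-tokenizer do before any vocabulary is learned, the tokenizers library provides string-level helpers (shown here on a made-up sentence):

sample = "Héllo,   World! It's 2-3 items."

# Normalization: lowercase, accents stripped, each whitespace character mapped to a space.
print(tokenizer.normalizer.normalize_str(sample))

# Pre-tokenization: a list of (chunk, offsets) pairs, split on whitespace, digits, and punctuation.
print(tokenizer.pre_tokenizer.pre_tokenize_str(sample))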

Finally, we’ll train the tokenizer with our dataset. After all, we want a tokenizer that works well on this dataset. There are a few different algorithms for training tokenizers. Here are two common ones:

  • BPE Algorithm: Start with a vocabulary of each character in the dataset. Examine all adjacent pairs and merge the pair with the highest frequency in the dataset (so “ee” is more likely to get merged than “zf” in an English corpus). Repeat until the vocabulary size is reached; see the toy sketch after this list.

  • Top-Down WordPiece Algorithm: Generate all substrings of each word from the dataset and count occurrences in the training data. Keep any string that occurs more than a threshold number of times. Repeat this process until the vocabulary size is reached (For a more thorough explanation of this process, see the TensorFlow Guide)
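
To build intuition for the BPE merge step, here is a toy sketch (illustration only; it is not the trainer we use below) that counts adjacent-pair frequencies in a tiny corpus and repeatedly merges the most frequent pair:

from collections import Counter

# A tiny corpus: words split into characters, with "_" marking the end of a word.
corpus = [list("low_"), list("lower_"), list("lowest_"), list("slow_")]

def most_frequent_pair(words):
  """Return the most frequent pair of adjacent symbols across the corpus."""
  pairs = Counter()
  for w in words:
    pairs.update(zip(w, w[1:]))
  return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
  """Replace every occurrence of `pair` with a single merged symbol."""
  merged = []
  for w in words:
    out, i = [], 0
    while i < len(w):
      if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
        out.append(w[i] + w[i + 1])
        i += 2
      else:
        out.append(w[i])
        i += 1
    merged.append(out)
  return merged

for step in range(3):
  pair = most_frequent_pair(corpus)
  corpus = merge_pair(corpus, pair)
  print(f"merge {step + 1}: {pair} -> {''.join(pair)}")

print(corpus)  # frequent substrings like "low" end up as single vocabulary entries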

We’ll use WordPiece in the next cell.

tokenizer_trainer = trainers.WordPieceTrainer(
    vocab_size=VOCAB_SIZE,
    # We have to specify the special tokens that we want to use. These will be
    # added to the vocabulary no matter what the vocab-building algorithm does.
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
    show_progress=True,
)

Special Tokens#

Tokenizers often have special tokens representing certain concepts such as:

  • [PAD]: Added to the end of shorter input sequences to ensure equal input length for the whole batch

  • [START]: Start of the sequence

  • [END]: End of the sequence

  • [UNK]: Unknown characters not present in the vocabulary

  • [BOS]: Beginning of sentence

  • [EOS]: End of sentence

  • [SEP]: Separation between two sentences in a sequence

  • [CLS]: Token used for classification tasks to represent the whole sequence

  • [MASK]: Used in pre-training phase for masked language modeling tasks in models like BERT

These special tokens are important because they tell the WordPiece training process how to treat phrases, masks, and unknown tokens.

Note: We can also add our own special tokens, such as [CITE], to indicate when a citation is about to be used, if we want to train a model to predict the presence of citations in a text. Training the tokenizer in the next cell will take a bit of time.

sample_ratio = 0.2
keep = int(len(dataset)*sample_ratio)
dataset_small = load_dataset("wikitext", "wikitext-103-raw-v1", split=f"train[:{keep}]")
tokenizer.train_from_iterator(generate_n_examples(dataset_small), trainer=tokenizer_trainer, length=len(dataset_small))



# In "real life", we'd probably want to save the tokenizer to disk so that we
# can use it later. We can do this with the "save" method:
# tokenizer.save("tokenizer.json")

# Let's try it out!
print("Hello, world!")
print(
    *zip(
        tokenizer.encode("Hello, world!").tokens,
        tokenizer.encode("Hello, world!").ids,
    )
)


# Can we also tokenize made-up words?
print(tokenizer.encode("These toastersocks are so groommpy!").tokens)
Hello, world!
('hell', 9140) ('##o', 2264) (',', 16) ('world', 4375) ('!', 5)
['these', 'to', '##aster', '##so', '##ck', '##s', 'are', 'so', 'gro', '##omm', '##p', '##y', '!']

(The ## means that the token is a continuation of the previous chunk.)

Try playing around with the hyperparameters and the tokenizing algorithms to see how they affect the tokenizer’s output. There can be some very major differences!
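
If we wanted this tokenizer to emit BERT-style sequences, we could optionally attach a post-processor that wraps every encoding in [CLS] ... [SEP] tokens. Here is a sketch using the processors module imported above (run it after the training cell, since it looks up the special-token ids; nothing below depends on it):

tokenizer.post_processor = processors.TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B [SEP]",
    special_tokens=[
        ("[CLS]", tokenizer.token_to_id("[CLS]")),
        ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ],
)

print(tokenizer.encode("Hello, world!").tokens)
# e.g. ['[CLS]', 'hell', '##o', ',', 'world', '!', '[SEP]']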

In summary, we created a tokenizer pipeline that:

  • Normalizes the text (cleans up punctuation and diacritics)

  • Splits the text into chunks (using whitespace and punctuation)

  • Trains the tokenizer on the dataset (using the WordPiece algorithm)

In common use, this would be the first step of any modern NLP pipeline. The next step would be to add an embedder to the end of the tokenizer, so that we can feed a high-dimensional representation of each token into our model. But unlike Word2Vec, we can now separate the tokenization step from the embedding step, which means our encoding/embedding process can be task-specific, custom to our downstream neural net architecture, instead of general-purpose.
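
As a sketch of that next step (an illustration, not something we train in this tutorial), an embedder can be as simple as a learnable lookup table that maps each token id to a vector, with the embedding dimension chosen here purely for demonstration:

embedding_dim = 64  # hypothetical embedding size
embedder = nn.Embedding(num_embeddings=VOCAB_SIZE, embedding_dim=embedding_dim)

ids = torch.tensor([tokenizer.encode("Hello, world!").ids])
vectors = embedder(ids)
print(vectors.shape)  # (1, number_of_tokens, embedding_dim)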

Think 2.1! Tokenizer good practices#

We established that the tokenizer is a better move than the One-Hot-Encoder because it can handle out-of-vocabulary words. But what if we just made a one-hot encoding where the vocabulary is all possible two-character combinations? Would there still be an advantage to the tokenizer?

Hint: Re-read the section on the BPE and WordPiece algorithms, and how the tokens are selected.

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Tokenizer_good_practices_Discussion")

Think 2.2: Chinese and English tokenizer#

Let’s think about a language like Chinese, where each word is composed of relatively few characters compared to English (“hungry” is six Unicode characters, but 饿 is one), yet there are many more unique Chinese characters than there are letters in the English alphabet.

In a one- or two-sentence high-level sketch, what properties would be desirable for a Chinese tokenizer to have?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Chinese_and_English_tokenizer_Discussion")

Section 3: Using BERT#

In this section, we will learn about using the BERT model from huggingface.

Learning Goals#

  • Understand the idea behind BERT

  • Understand the idea of pre-training and fine-tuning

  • Understand how freezing parts of the network is useful

Video 3: BERT#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_BERT_Video")

Section 4: NLG with GPT#

In this section we will learn about Natural Language Generation with Generative Pretrained Transformers.

Learning goals#

  • How to produce language with GPTs

Video 4: NLG#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_NLG_Video")

Using state-of-the-art (SOTA) Models#

Unless you are writing your own experimental DL research (and sometimes even then!) it is far more common these days to use the HuggingFace model library to import and start working with state-of-the-art models quickly. In this section, we will show you how to do that.

We will download a pretrained model from the hf transformers library that is used to generate text. We will then fine-tune it on a different dataset, using the hf.datasets library and the HuggingFace Trainer classes to make the process as easy as possible, and we’ll see that we can accomplish all of this in just a few lines of easily maintained code.

Ultimately, we will have a working generator… for code!

We’re first going to pick a tokenizer. You can see some of the options here. We’ll use the CodeParrot tokenizer, which is a BPE tokenizer. But you can choose (or build!) another if you’d like to try offroading!

from transformers import AutoTokenizer
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")

Think 4.1! Tokenizers#

Why can you use a different tokenizer than the one that was originally used? What requirements must another tokenizer for this task have?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Tokenizers_Discussion")

Next, we’ll download a pre-built model architecture. CodeParrot (the model) is a GPT-2 model, which is a transformer-based language model. You can see some of the options here. But you can choose (or build!) another!

Note that codeparrot/codeparrot (https://huggingface.co/codeparrot/codeparrot) is about 7GB to download (so it may take a while, or it may be too large for your runtime if you’re on a free Colab). Instead, we will use a smaller model, codeparrot/codeparrot-small (https://huggingface.co/codeparrot/codeparrot-small), which is only ~500MB.

To run everything together (tokenization, model, and de-tokenization), we can use the pipeline function from transformers.

from transformers import AutoModelWithLMHead
from transformers import pipeline

model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small")
generation_pipeline = pipeline(
    "text-generation", # The task to run. This tells hf what the pipeline steps are
    model=model, # The model to use; can also pass the string here;
    tokenizer=tokenizer, # The tokenizer to use; can also pass the string name here.
)
input_prompt = '''\
def simple_add(a: int, b: int) -> int:
    """
    Adds two numbers together and returns the result.
    """'''

# Return tensors for PyTorch:
inputs = tokenizer(input_prompt, return_tensors="pt")

Recall that these tokens are integer indices in the vocabulary of the tokenizer. We can use the tokenizer to decode these tokens into a string, which we can print out to see what the model generates.

input_token_ids = inputs["input_ids"]
input_strs = tokenizer.convert_ids_to_tokens(*input_token_ids.tolist())

print(*zip(input_strs, input_token_ids[0]))
('def', tensor(318)) ('Ġsimple', tensor(3486)) ('_', tensor(63)) ('add', tensor(525)) ('(', tensor(8)) ('a', tensor(65)) (':', tensor(26)) ('Ġint', tensor(1109)) (',', tensor(12)) ('Ġb', tensor(330)) (':', tensor(26)) ('Ġint', tensor(1109)) (')', tensor(9)) ('Ġ->', tensor(1035)) ('Ġint', tensor(1109)) (':', tensor(26)) ('ĊĠĠĠ', tensor(272)) ('Ġ"""', tensor(408)) ('ĊĠĠĠ', tensor(272)) ('ĠAdds', tensor(15747)) ('Ġtwo', tensor(2877)) ('Ġnumbers', tensor(5579)) ('Ġtogether', tensor(10451)) ('Ġand', tensor(436)) ('Ġreturns', tensor(2529)) ('Ġthe', tensor(314)) ('Ġresult', tensor(754)) ('.', tensor(14)) ('ĊĠĠĠ', tensor(272)) ('Ġ"""', tensor(408))

(Quick knowledge-check: what do the weirdly-rendering characters represent?)

This model is already ready to use! Let’s give it a try. (Note that we don’t use inputs — we just generated that to show the initial tokenization steps.)

Here, we use the pipeline we created earlier to combine all our components. If you were writing a Copilot-style code-completer, you could get away with wrapping this single line in a nice API and calling it a day!

Play with the hyperparameters and see what kinds of outputs you can get. Temperature controls how much randomness is added to the model’s predictions: higher temperature means more randomness, lower temperature means less. More randomness when sampling from the output distribution leads to wilder predictions and potentially more creative answers. A good place to start is 0.2. You can also try changing the max_length parameter, which controls how long the generated code can be (though the model can emit a “stop” token in the middle of the sequence, so it may not always generate exactly this many tokens).
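
Under the hood, temperature simply divides the logits before the softmax, so low values sharpen the distribution around the most likely token and high values flatten it. A small illustration with made-up logits:

logits = torch.tensor([3.0, 1.0, 0.2])  # hypothetical scores for three candidate tokens

for temperature in (0.2, 1.0, 2.0):
  probs = F.softmax(logits / temperature, dim=-1)
  print(f"T={temperature}: {probs.numpy().round(3)}")

# T=0.2 puts nearly all probability mass on the top token; T=2.0 spreads it out.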

outputs = generation_pipeline(input_prompt, max_length=100, num_return_sequences=1, temperature=0.2)
print(outputs[0]["generated_text"])
def simple_add(a: int, b: int) -> int:
    """
    Adds two numbers together and returns the result.
    """
    return a + b


def simple_mul(a: int, b: int) -> int:
    """
    Multiplies two numbers together and returns the result.
    """
    return a * b


def simple_div(a: int, b: int) -> int:
    """
    Divides two numbers together

Let’s see if we can fool our model now! The huggingface documentation tells us that the codeparrot model was trained to generate Python code (docs). Let’s see if we can get it to generate some JavaScript.

input_prompt = "class SimpleAdder {"

print(generation_pipeline(input_prompt, max_length=100, num_return_sequences=1, temperature=0.2)[0]["generated_text"])
class SimpleAdder {
    public:
        def __init__(self, *args, **kwargs):
            super(SimpleAdder, self).__init__(*args, **kwargs)
            self.name = 'SimpleAdder'
            self.description = 'Simple adder'
            self.type = 'SimpleAdder'
            self.description_type = 'SimpleAdder'
            self.description_description = 'Simple adder description'
            self.description_type_description

Yikes! I don’t know what it generated for you, but what it made for me was:

class SimpleAdder {
    public:
        class SimpleAdder(object):
            def __init__(self, a, b):
                self.a = a
                self.b = b

            def __call__(self, x):
                return self.a + x

Ew! That’s wrong in a lot of ways. But it’s understandable: Our model can’t really generalize outside of the domain in which it was trained. And so probably there were a few Python files that included syntax of other languages (perhaps generators for other code?). So the model knows that there’s some mysterious syntax that uses curly brackets… But it’s not sure about anything else. (For the programming-language hobbyists among you: The public notation looks to me a lot like the model is trying to do something C-flavored and perhaps something Java-flavored; I like it! But it’s definitely not JavaScript.)

What are the major observations?

  • The syntax it generates rapidly devolves into Python; it can predict only a few characters of non-Python before falling back into its familiar training territory.

  • The part of the code that follows Python syntax is valid and resembles a useful class definition (although if you look closely, it doesn’t seem to do anything useful with the b attribute…). This tells us that the model “understands” its problem domain but hasn’t been trained on the correct data to solve our new problem.

Think 4.2! Using SOTA models#

Now that you know how Transformers work, what other observations can you make about the code it generated for you?

  1. Think specifically and remark about the observations a machine learning practitioner would make here if your role were to diagnose the error in a production system.

  2. Now, how would a nonexpert user interpret the issues?

  3. Do you think the model-reported confidence for this output would be high, low, or in between…?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Using_SOTA_models_Discussion")

Fine-Tuning#

Alright, so we have a model that can generate code. But now, we want to fine-tune it to generate JavaScript.

Assuming the data will be too large to fit on disk on Colab, we’ll use the load_dataset function to stream only part of the dataset. There’s a JavaScript subset of the codeparrot github-code dataset, which we’ll use as an example. We recommend filtering datasets by task category (e.g., text generation) to find the most relevant ones, but you can use any dataset you like if you can configure the data loader to use it. (Consider, for example, this one.)

Choose a dataset from the HuggingFace datasets library.

# Unlike _some_ code-generator models on the market, we'll limit our training data by license :)
dataset = load_dataset(
    "codeparrot/github-code",
    streaming=True,
    split="train",
    languages=["JavaScript"],
    licenses=["mit", "isc", "apache-2.0"],
    trust_remote_code=True,  # this dataset ships a custom loading script
)
# Print the schema of the first example from the training set:
print({k: type(v) for k, v in next(iter(dataset)).items()})

Like training any model, we need to define a training loop and an evaluation metric.

This is made overwhelmingly easy with the transformers library. Specifically, look below at how much code you avoid writing by using the huggingface infrastructure. (In the past, we’ve used PyTorch Lightning, which has a similar training-loop abstraction. Do you have preferences between these two libraries?)

Implement the code to fine-tune the model#

Here are the big pieces of what we do below:

  • Create a TrainingArguments object. This serializable object (i.e., you can save it to memory or disk) makes it easy to train a model reproducibly with the same hyperparameters (this certainly beats having a bunch of global variables in your notebook!).

  • Encode the dataset. This is effectively just passing everything through the tokenizer, with a padding step that fills the end of each sequence with the padding token.

  • Define our metrics. We use the accuracy metric here (see the metric = load("accuracy") line in the code cell below).

  • Create a data collator. This function takes a list of examples and returns a batch of examples. The DataCollatorForLanguageModeling class is a convenient way to do this.

  • Create a Trainer object. This class wraps the training loop and makes it easy to train a model. It’s similar to the Trainer class in PyTorch Lightning, but more flexible, and it works with non-PyTorch models as well.

from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling
from evaluate import load
metric = load("accuracy")

# Trainer:
training_args = TrainingArguments(
    output_dir="./codeparrot",
    max_steps=100,
    per_device_train_batch_size=1,
)

tokenizer.pad_token = tokenizer.eos_token

encoded_dataset = dataset.map(
    lambda x: tokenizer(x["code"], truncation=True, padding="max_length"),
    batched=True,
    remove_columns=["code"],
)


# Metrics for loss:
def compute_metrics(eval_pred):
  predictions, labels = eval_pred
  predictions = np.argmax(predictions, axis=-1)
  return metric.compute(predictions=predictions, references=labels)


# Data collator:
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False,
)

# Trainer:
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    data_collator=data_collator,
)
# Run the actual training:
trainer.train()

Coding Exercise 4.1: Implement the code to generate text after fine-tuning.#

To generate text, we provide input tokens to the model, let it generate the next token, and append that token to the input tokens. We keep repeating this process until we reach the desired output length.

# Number of tokens to generate
num_tokens = 100

# Move the model to the CPU for inference
model.to("cpu")

# Print input prompt
print(f'Input prompt: \n{input_prompt}')

#################################################
# Implement the correct tokens and outputs
raise NotImplementedError("Text Generation")
#################################################

# Encode the input prompt
# https://huggingface.co/docs/transformers/en/main_classes/tokenizer
input_tokens = ...

# Turn off storing gradients
with torch.no_grad():
  # Keep iterating until num_tokens are generated
  for tkn_idx in tqdm(range(num_tokens)):
    # Forward pass through the model
    # The model expects the tensor to be of Long or Int dtype
    output = ...
    # Get output logits
    logits = output.logits[-1, :]
    # Convert into probabilities
    probs = nn.functional.softmax(logits, dim=-1)
    # Get the index of top token
    top_token = ...
    # Append the token into the input sequence
    input_tokens.append(top_token)

# Decode and print the generated text
# https://huggingface.co/docs/transformers/en/main_classes/tokenizer
decoded_text = ...
print(f'Generated text: \n{decoded_text}')

Click for solution

We can also directly generate text using the generation_pipeline:

# Move the model to the CPU for inference
model.to("cpu")
print(
    generation_pipeline(
        input_prompt, max_length=100, num_return_sequences=1, temperature=0.2
    )[0]["generated_text"]
)
class SimpleAdder {
    public:
        def __init__(self, name, args, kwargs):
            self.name = name
            self.args = args
            self.kwargs = kwargs
        def __call__(self, *args, **kwargs):
            return self.args + self.kwargs
        def __repr__(self):
            return "<SimpleAdder %s>" % self.name

class SimpleAdder2 {
    public:
        def __init__(self, name

Of course, your results will be slightly different. Here’s what I got:

class SimpleAdder {
    constructor(a, b) {
        this.a = a;
        this.b = b;
    }

    add(

Much better! The model is no longer generating Python code, and it’s not trying to jam Python-flavored syntax into other languages. It’s still imperfect, but it’s much better than before! (And, of course, remember that this is just a small model, and we didn’t train it for very long. You can either try training it for longer or using a larger model to get better results.)

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_FineTune_the_model_Exercise")

Think 4.3! Accuracy metric observations#

Why might accuracy be a bad metric for this task?

Hint: What does it mean to be “accurate” in this task?

Click for solution

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Accuracy_metric_observations_Discussion")

Section 5: GPT Today and Tomorrow#

Limitation of the current models.

Video 5: Conclusion#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Conclusion_Video")

Play around with LLMs#

  1. Try using LLM APIs to do tasks, such as using the GPT-2 inference API to extend text from a provided context. To do this, make sure you have a HuggingFace account and an API token.

import requests

def query(payload, model_id, api_token):
  headers = {"Authorization": f"Bearer {api_token}"}
  API_URL = f"https://api-inference.huggingface.co/models/{model_id}"
  response = requests.post(API_URL, headers=headers, json=payload)
  return response.json()

model_id = "gpt2"
api_token = "hf_****" # get yours at hf.co/settings/tokens
data = query("The goal of life is", model_id, api_token)
print(data)
{'error': 'Authorization header is correct, but the token seems invalid'}
  2. Try the following questions with ChatGPT (GPT-3.5, without access to the web) and with GPTBing in creative mode (GPT-4, with access to the web). Note that the latter requires installing Microsoft Edge.

Pick someone you know who is likely to have a web presence but is not super famous (not Musk or Trump). Ask GPT for a two-paragraph biography. How good is it?

Ask it something like “What is the US, UK, Germany, China, and Japan’s per capita income over the past ten years? Plot the data in a single figure” (depending on when and where you run this, you will need to paste the resulting Python code into a colab notebook). Try asking it questions about the data or the definition of “per capita income” used. How good is it?

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_Play_around_with_LLMs_Activity")

Summary#

In this tutorial, you became familiar with modern natural language processing (NLP) architectures. We learned about the core concepts, functionalities, and applications of these architectures, gained insights into prompt engineering, and learned about GPT.


Daily survey#

Don’t forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey.



Bonus Section: Using Large Language Models (LLMs)#

This video tells you what large language models are being used for now and how you can use them: for instance, personalized tutoring, language practice, improving writing, exam preparation, writing help, and data science.

Video 6: Using GPT#

Submit your feedback#

# @title Submit your feedback
content_review(f"{feedback_prefix}_What_models_Video")