
Example Deep Learning Project

By Neuromatch Academy

Content creators: Marius ‘t Hart, Megan Peters, Vladimir Haltakov, Paul Schrater, Gunnar Blohm

Production editor: Spiros Chavlis



Objectives

We’re interested in automatically classifying movement. There is a great dataset (MoVi) with different modalities of movement recordings (videos, visual markers, accelerometers, skeletal motion reconstructions, etc.). We will use a subset of this data, i.e. the estimated skeletal motion, to perform a pilot study investigating whether we can classify different movements from the skeletal motion. And if so, which skeletal motions (if not all) are necessary for good decoding performance?

Please check out the different resources below to better understand the MoVi dataset and learn more about the movements.

Resources:


Setup

For your own project, you can put together a colab notebook by copy-pasting bits of code from the tutorials. We still recommend keeping the 4 setup cells at the top, as we do here: Imports, Figure settings, Plotting functions, and Data retrieval.

# Imports
# get some matrices and plotting:
import numpy as np
import matplotlib.pyplot as plt

# get some pytorch:
import torch
import torch.nn as nn
from torch.nn import MaxPool1d
from torch.utils.data import Dataset
from torch.utils.data import DataLoader

# confusion matrix from sklearn
from sklearn.metrics import confusion_matrix

# to get some idea of how long stuff will take to complete:
import time

# to see how unbalanced the data is:
from collections import Counter

Figure settings

# @title Figure settings
import ipywidgets as widgets #interactive display

%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")

Plotting functions

# @title Plotting functions

def plotConfusionMatrix(real_labels, predicted_labels, label_names):

  # convert the labels to integers:
  real_labels = [int(x) for x in real_labels]
  predicted_labels = [int(x) for x in predicted_labels]
  tick_names = [a.replace("_", " ") for a in label_names]

  cm = confusion_matrix(real_labels, predicted_labels, normalize='true')

  fig = plt.figure(figsize=(8,6))
  plt.imshow(cm)
  plt.xticks(range(len(tick_names)),tick_names, rotation=90)
  plt.yticks(range(len(tick_names)),tick_names)
  plt.xlabel('predicted move')
  plt.ylabel('real move')
  plt.show()

Data retrieval

Run this cell to download the data for this example project.

# @title Data retrieval
# @markdown Run this cell to download the data for this example project.
import io
import requests
r = requests.get('https://osf.io/mnqb7/download')
if r.status_code != 200:
  print('Failed to download data')
else:
  # load the npz archive once and pull out the six arrays:
  data = np.load(io.BytesIO(r.content), allow_pickle=True)
  train_moves = data['train_moves']
  train_labels = data['train_labels']
  test_moves = data['test_moves']
  test_labels = data['test_labels']
  label_names = data['label_names']
  joint_names = data['joint_names']

Step 1: Question

There are many different questions we could ask with the MoVi dataset. We will start with a simple question: “Can we classify movements from skeletal motion data, and if so, which body parts are the most informative ones?”

Our goal is to perform a pilot study to see if this is possible in principle. We will therefore use “ground truth” skeletal motion data that has been computed using an inference algorithm (see MoVi paper). If this works out, then as a next step we might want to use the raw sensor data or even videos…

The ultimate goal could for example be to figure out which body parts to record movements from (e.g. is just a wristband enough?) to classify movement.


Step 2: literature review

Most importantly, our literature review needs to address the following:

  • what modeling approaches make it possible to classify time series data?

  • how is human motion captured?

  • what exactly is in the MoVi dataset?

  • what is known regarding classification of human movement based on different measurements?

What we learn from the literature review is too long to write out here… But we would like to point out that human motion classification has been done; we’re not proposing a very novel project here. But that’s ok for an NMA project!


Step 3: ingredients

Data ingredients

After downloading the data, we should have 6 numpy arrays:

  • train_moves: the training set of 1032 movements

  • train_labels: the class labels for each of the 1032 training movements

  • test_moves: the test set of 172 movements

  • test_labels: the class labels for each of the 172 test movements

  • label_names: text labels for the values in the two arrays of class labels

  • joint_names: the names of the 24 joints used in each movement

We’ll take a closer look at the data below. Note: data is split into training and test sets. If you don’t know what that means, NMA-DL will teach you!

Inputs:

For simplicity, we take the first 24 joints of the whole MoVi dataset, which include all major limbs. The data is in an exponential map format, which has 3 rotations/angles for each joint (pitch, yaw, roll). The advantage of this type of data is that it is (mostly) agnostic about body size or shape. And since we care about movements only, we choose this representation of the data (there are other representations in the full dataset).

Since the joints are simply points, the 3rd angle (i.e. roll) carries no information, and it has already been dropped from the data we pre-formatted for this demo project. That is, the movement of each joint is described by 2 angles that change over time. Furthermore, we normalized all the angles/rotations to fall between 0 and 1, so they make good input for PyTorch.

Finally, the movements originally took various amounts of time, but we need the same input for each movement, so we sub-sampled and (linearly) interpolated the data to have 75 timepoints.
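
That resampling step can be sketched with `np.interp`. This is a minimal illustration, not the exact preprocessing code: the single-channel signal and the name `n_target` are hypothetical, but the idea (linear interpolation to a fixed 75 timepoints, then min-max normalization to [0, 1]) matches what was described above.

```python
import numpy as np

# hypothetical recording: one channel with 113 timepoints (lengths varied)
original = np.sin(np.linspace(0, 2 * np.pi, 113))

# resample to a fixed number of timepoints by linear interpolation
n_target = 75
x_old = np.linspace(0, 1, len(original))
x_new = np.linspace(0, 1, n_target)
resampled = np.interp(x_new, x_old, original)

# min-max normalize to [0, 1], as was done for the angles
normalized = (resampled - resampled.min()) / (resampled.max() - resampled.min())
print(resampled.shape)  # (75,)
```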

Our training data is supposed to have 1032 movements, 2 x 24 joints = 48 channels and 75 timepoints. Let’s check and make sure:

print(train_moves.shape)
(1032, 48, 75)

Cool!

Joints:

For each movement we have 2 angles from 24 joints. Which joints are these?

for joint_no in range(24):
  print(f"{joint_no}: {joint_names[joint_no]}")
0: Pelvis
1: LeftHip
2: RightHip
3: Spine1
4: LeftKnee
5: RightKnee
6: Spine2
7: LeftAnkle
8: RightAnkle
9: Spine3
10: LeftFoot
11: RightFoot
12: Neck
13: LeftCollar
14: RightCollar
15: Head
16: LeftShoulder
17: RightShoulder
18: LeftElbow
19: RightElbow
20: LeftWrist
21: RightWrist
22: LeftHand
23: RightHand

Labels:

Let’s have a look at the train_labels array too:

print(train_labels)
print(train_labels.shape)
[ 0  1  4 ...  6  2 11]
(1032,)

The labels are numbers, and there are 1032 of them, so that matches the number of movements in the data set. There are text versions too in the array called label_names. Let’s have a look. There are supposed to be 14 movement classes.

# let's check the values of the train_labels array:
label_numbers = np.unique(train_labels)
print(label_numbers)

# and use them as indices into the label_names array:
for label_no in label_numbers:
  print(f"{label_no}: {label_names[label_no]}")
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13]
0: crawling
1: throw/catch
2: walking
3: running_in_spot
4: cross_legged_sitting
5: hand_clapping
6: scratching_head
7: kicking
8: phone_talking
9: sitting_down
10: checking_watch
11: pointing
12: hand_waving
13: taking_photo

The test data set has similar data, but fewer movements. That’s ok. What’s important is that both the training and test datasets have an even spread of movement types, i.e. we want them to be balanced. Let’s see how balanced the data is:

Counter(train_labels)
Counter({0: 74,
         1: 74,
         4: 73,
         5: 73,
         6: 74,
         7: 74,
         8: 74,
         9: 74,
         10: 74,
         11: 74,
         12: 74,
         13: 74,
         3: 73,
         2: 73})
Counter(test_labels)
Counter({2: 13,
         3: 13,
         5: 13,
         4: 13,
         6: 12,
         7: 12,
         8: 12,
         9: 12,
         11: 12,
         10: 12,
         12: 12,
         13: 12,
         1: 12,
         0: 12})

So that looks more or less OK. Movements 2, 3, 4 and 5 occur one time fewer in the training data than the other movements, and once more in the test data. Not perfect, but it probably doesn’t matter that much.
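
These counts also give us a baseline to compare model accuracy against later: with 14 nearly balanced classes, always predicting the most frequent class scores only about 7%. A quick back-of-the-envelope check (the list below just restates the training counts from the Counter output above):

```python
# class counts from the training set: ten classes with 74 movements, four with 73
n_per_class = [74] * 10 + [73] * 4
chance = max(n_per_class) / sum(n_per_class)  # majority-class baseline
print(f"{chance:.1%}")  # about 7.2%
```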

Model ingredients

“Mechanisms”:

  • Feature engineering? –> Do we need anything else aside from angular time courses? For now we choose to only use the angular time courses (exponential maps), as our ultimate goal is to see how many joints we need for accurate movement classification so that we can decrease the number of measurements or devices for later work.

  • Feature selection? –> Which joint movements are most informative? These are related to our research questions and hypotheses, so this project will explicitly investigate which joints are most informative.

  • Feature grouping? –> Instead of trying all possible combinations of joints (very many) we could focus on limbs, by grouping joints. We could also try the model on individual joints.

  • Classifier? –> For our classifier we would like to keep it as simple as possible, but we will decide later.

  • Input? –> The training data (movements and labels) will be used to train the classifier.

  • Output? –> The test data will be used as input for the trained model and we will see if the predicted labels are the same as the actual labels.
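
For the feature-grouping idea, one possible grouping of the 24 joints into limbs and torso/head could look like the sketch below. The indices follow the joint_names list printed earlier; the grouping itself is a design choice we are making, not something given in the dataset.

```python
# one possible grouping of the 24 joints (indices follow joint_names);
# the grouping is our own choice, not part of the dataset
joint_groups = {
    'torso_head': [0, 3, 6, 9, 12, 13, 14, 15],  # pelvis, spine, neck, collars, head
    'left_arm':   [16, 18, 20, 22],
    'right_arm':  [17, 19, 21, 23],
    'left_leg':   [1, 4, 7, 10],
    'right_leg':  [2, 5, 8, 11],
}

# check that every joint lands in exactly one group:
covered = sorted(j for group in joint_groups.values() for j in group)
print(covered == list(range(24)))
```

Each of these lists could then be used to select joints when building a dataset for a particular model comparison.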


Step 4: hypotheses

Since humans can easily distinguish different movement types from video data and also more abstract “stick figures”, a DL model should also be able to do so. Therefore, our hypotheses are more detailed with respect to parameters influencing model performance (and not just whether it will work or not).

Remember, we’re interested in seeing how many joints are needed for classification. So we could hypothesize (Hypothesis 1) that arm and leg motions are sufficient for classification (meaning: head and torso data are not needed).

  • Hypothesis 1: The performance of a model with four limbs plus torso and head is not higher than the performance of a model with only limbs.

We could also hypothesize that data from only one side of the body is sufficient (Hypothesis 2), e.g. the right side, since our participants are right-handed.

  • Hypothesis 2: A model using only joints in the right arm will outperform a model using only the joints in the left arm.

Writing those in mathematical terms:

  • Hypothesis 1: \(\mathbb{E}(perf_{limbs+torso+head}) \leq \mathbb{E}(perf_{limbs})\)

  • Hypothesis 2: \(\mathbb{E}(perf_{right arm})>\mathbb{E}(perf_{left arm})\)


Step 5: toolkit selection

We need a toolkit that can deal with time-varying data as input (e.g. 1d convnet, LSTM, transformer…). We want to keep it as simple as possible to start with. So let’s run with a 1d convnet. It allows us to answer our question, it will be able to speak to our hypotheses, and hopefully we can achieve our goal to see if automatic movement classification based on (sparse) body movement data is possible.


Step 6: model drafting

Here is our sketch of the model we wanted to build…


Step 7: model implementation

It’s finally time to write some deep learning code… so here we go!

The cell below creates a dataset class for our data. It is adapted from the Dataset tutorial on the PyTorch website: https://pytorch.org/tutorials/beginner/basics/data_tutorial.html

It subclasses the PyTorch Dataset class, which is needed to set up the Dataloader objects that will feed the model.

We can tell our dataset object to use the training or test data. We can also tell it which joints to return, so that we can build models that classify movements based on different sets of joints:

class MoViJointDataset(Dataset):
  """MoVi dataset."""

  def __init__(self, train=True, joints=list(range(24))):
    """
    Args:
      train (boolean): Use the training data, or otherwise the test data.
      joints (list): Indices of joints to return.
    """

    # select the training or test data:
    if train:
      self.moves = train_moves
      self.labels = train_labels
    else:
      self.moves = test_moves
      self.labels = test_labels

    # convert joint indices to channel indices:
    joints = np.array(joints)
    self.channels = np.sort(list(joints * 2) + list((joints * 2) + 1))  # 2 channels per joint

  def __len__(self):
    return self.labels.size

  def __getitem__(self, idx):
    if torch.is_tensor(idx):
      idx = idx.tolist()

    sample = (np.float32(np.squeeze(self.moves[idx,self.channels,:])), self.labels[idx])

    return sample

We want to make sure that this object works the way we intended, so we try it out:

# Create training and test datasets
movi_train = MoViJointDataset(train=True)
movi_test = MoViJointDataset(train=False, joints=[0,1,2])

print('TRAINING:')
for idx in range(len(movi_train)):
  pass
print(idx)
print(movi_train[idx][0].shape, label_names[movi_train[idx][1]])
print('\nTESTING:')
for idx in range(len(movi_test)):
  pass
print(idx)
print(movi_test[idx][0].shape, label_names[movi_test[idx][1]])
TRAINING:
1031
(48, 75) pointing

TESTING:
171
(6, 75) cross_legged_sitting

So we see the index of the last movement (the number of movements minus 1, since indexing starts at 0), the shape of a single sample (e.g. 48 channels and 75 timepoints for the full training set), and the name of the movement.

Build model

PyTorch expects as input not a single sample, but rather a minibatch of B samples stacked together along the “minibatch dimension”. So a “1D” CNN in PyTorch expects a 3D tensor as input: B×C×T

  • B: batch size (however many examples are used in batch training)

  • C: channels (up to 24 joints x 2 coordinates)

  • T: timepoints (75 in our case)
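
The stacking itself can be illustrated with numpy (the DataLoader does the equivalent with torch tensors; the samples here are hypothetical all-zero movements):

```python
import numpy as np

# four hypothetical samples, each C=48 channels x T=75 timepoints
samples = [np.zeros((48, 75), dtype=np.float32) for _ in range(4)]

# stacking along a new first axis gives the B x C x T minibatch shape
batch = np.stack(samples)
print(batch.shape)  # (4, 48, 75)
```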

We need Dataloader objects that wrap our MoViJointDataset objects to do this stacking. For this we can simply use PyTorch Dataloader objects; they also need one of our hyperparameters (the batch size) to be set:

# Hyperparameters
num_epochs = 500
num_classes = 14  # number of movement classes (hard-coded as 14 in the model below)
batch_size = 516
learning_rate = 0.001

# Create training and test datasets
movi_train = MoViJointDataset(train = True)
movi_test  = MoViJointDataset(train = False)

# Data loader
train_loader = DataLoader(dataset=movi_train, batch_size=batch_size, shuffle=True)
test_loader  = DataLoader(dataset=movi_test, batch_size=batch_size, shuffle=False)

And we decided to use a simple 1D convnet. We specify the number of joints used, with 2 input channels for every joint (2 dimensions of rotation). The network ends with 14 outputs, one (log-)probability for each class of movement; the predicted class is then the index of the highest one.

class Mov1DCNN(nn.Module):
  def __init__(self, njoints=24):

    super(Mov1DCNN, self).__init__()

    self.layer1 = nn.Sequential(
      nn.Conv1d(in_channels=njoints*2, out_channels=56, kernel_size=5, stride=2),
      nn.ReLU(),
      nn.MaxPool1d(kernel_size=2, stride=2))

    self.layer2 = nn.Sequential(
      nn.Conv1d(in_channels=56, out_channels=14, kernel_size=1),
      nn.ReLU(),
      nn.MaxPool1d(kernel_size=2, stride=2))

    self.dropout1 = nn.Dropout(p=0.2)
    self.fc1 = nn.Linear(126, 2200)  # 126 = 14 channels x 9 timepoints after the conv layers
    self.nl = nn.ReLU()
    self.dropout2 = nn.Dropout(p=0.2)
    self.fc2 = nn.Linear(2200, 14)

  def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(out)

    out = out.reshape(out.size(0), -1)
    out = self.dropout1(out)
    out = self.fc1(out)
    out = self.nl(out)
    out = self.dropout2(out)
    out = self.fc2(out)
    # return log-probabilities for each of the 14 classes:
    out = nn.functional.log_softmax(out, dim=1)

    return out
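
Where does the 126 in fc1 come from? It falls out of the conv/pool length arithmetic. A small sketch (conv1d_out_len is our own helper, using the standard formula L_out = (L_in − kernel) // stride + 1 with no padding or dilation):

```python
def conv1d_out_len(length, kernel, stride):
  # output length of a 1D conv or pool with no padding or dilation
  return (length - kernel) // stride + 1

t = conv1d_out_len(75, kernel=5, stride=2)  # layer1 conv:    75 -> 36
t = conv1d_out_len(t, kernel=2, stride=2)   # layer1 maxpool: 36 -> 18
t = conv1d_out_len(t, kernel=1, stride=1)   # layer2 conv:    18 -> 18
t = conv1d_out_len(t, kernel=2, stride=2)   # layer2 maxpool: 18 -> 9
print(14 * t)  # 14 output channels x 9 timepoints -> 126
```

Note that changing njoints only changes the number of input channels, not the time dimension, which is why the same fc1 size works for any joint subset.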

We can now instantiate the model object, with all joints, and set a criterion and optimizer:

# use a GPU if one is available:
device = "cuda" if torch.cuda.is_available() else "cpu"

# create the model object:
model = Mov1DCNN(njoints=24).to(device)

# loss and optimizer:
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

And now we are ready to train this model.

This takes up to ~20 seconds!

# Train the model
total_step = len(train_loader)
loss_list = []
acc_list = []
for epoch in range(num_epochs):
  for i, (motions, labels) in enumerate(train_loader):
    motions, labels = motions.to(device), labels.to(device)

    # Run the forward pass
    outputs = model(motions)
    loss = criterion(outputs, labels)
    loss_list.append(loss.item())

    # Backprop and perform Adam optimisation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Track the accuracy
    total = labels.size(0)
    _, predicted = torch.max(outputs.data, 1)
    correct = (predicted == labels).sum().item()
    acc_list.append(correct / total)

    if (i + 1) % 2 == 0:
        print(f"Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{total_step}], "
              f"Loss: {loss.item():.4f}, "
              f"Accuracy: {((correct / total) * 100):.2f}%")
Epoch [1/500], Step [2/2], Loss: 2.6480, Accuracy: 5.43%
Epoch [2/500], Step [2/2], Loss: 2.6406, Accuracy: 6.78%
Epoch [3/500], Step [2/2], Loss: 2.6367, Accuracy: 9.69%
Epoch [4/500], Step [2/2], Loss: 2.6323, Accuracy: 10.08%
Epoch [5/500], Step [2/2], Loss: 2.6326, Accuracy: 11.05%
Epoch [6/500], Step [2/2], Loss: 2.6314, Accuracy: 7.17%
Epoch [7/500], Step [2/2], Loss: 2.6168, Accuracy: 11.82%
Epoch [8/500], Step [2/2], Loss: 2.6110, Accuracy: 13.57%
Epoch [9/500], Step [2/2], Loss: 2.6021, Accuracy: 11.82%
Epoch [10/500], Step [2/2], Loss: 2.5813, Accuracy: 24.81%
Epoch [11/500], Step [2/2], Loss: 2.5629, Accuracy: 16.86%
Epoch [12/500], Step [2/2], Loss: 2.5371, Accuracy: 16.86%
Epoch [13/500], Step [2/2], Loss: 2.4985, Accuracy: 22.29%
Epoch [14/500], Step [2/2], Loss: 2.4681, Accuracy: 22.09%
Epoch [15/500], Step [2/2], Loss: 2.3990, Accuracy: 23.26%
Epoch [16/500], Step [2/2], Loss: 2.4158, Accuracy: 14.73%
Epoch [17/500], Step [2/2], Loss: 2.3237, Accuracy: 20.93%
Epoch [18/500], Step [2/2], Loss: 2.2643, Accuracy: 28.10%
Epoch [19/500], Step [2/2], Loss: 2.2219, Accuracy: 24.81%
Epoch [20/500], Step [2/2], Loss: 2.2181, Accuracy: 19.57%
Epoch [21/500], Step [2/2], Loss: 2.1003, Accuracy: 33.14%
Epoch [22/500], Step [2/2], Loss: 2.0728, Accuracy: 36.82%
Epoch [23/500], Step [2/2], Loss: 2.0181, Accuracy: 33.33%
Epoch [24/500], Step [2/2], Loss: 1.9636, Accuracy: 38.57%
Epoch [25/500], Step [2/2], Loss: 1.9453, Accuracy: 38.37%
Epoch [26/500], Step [2/2], Loss: 1.9323, Accuracy: 41.67%
Epoch [27/500], Step [2/2], Loss: 2.1525, Accuracy: 15.50%
Epoch [28/500], Step [2/2], Loss: 2.2903, Accuracy: 16.67%
Epoch [29/500], Step [2/2], Loss: 2.0728, Accuracy: 23.06%
Epoch [30/500], Step [2/2], Loss: 1.9498, Accuracy: 27.33%
Epoch [31/500], Step [2/2], Loss: 1.8789, Accuracy: 36.82%
Epoch [32/500], Step [2/2], Loss: 1.7673, Accuracy: 42.05%
Epoch [33/500], Step [2/2], Loss: 1.7470, Accuracy: 47.09%
Epoch [34/500], Step [2/2], Loss: 1.6760, Accuracy: 50.00%
Epoch [35/500], Step [2/2], Loss: 1.6755, Accuracy: 48.45%
Epoch [36/500], Step [2/2], Loss: 1.6765, Accuracy: 51.74%
Epoch [37/500], Step [2/2], Loss: 1.6511, Accuracy: 50.78%
Epoch [38/500], Step [2/2], Loss: 1.6547, Accuracy: 49.03%
Epoch [39/500], Step [2/2], Loss: 1.5766, Accuracy: 54.46%
Epoch [40/500], Step [2/2], Loss: 1.6296, Accuracy: 49.22%
Epoch [41/500], Step [2/2], Loss: 1.5729, Accuracy: 48.64%
Epoch [42/500], Step [2/2], Loss: 1.5767, Accuracy: 47.67%
Epoch [43/500], Step [2/2], Loss: 1.5290, Accuracy: 53.49%
Epoch [44/500], Step [2/2], Loss: 1.5140, Accuracy: 56.20%
Epoch [45/500], Step [2/2], Loss: 1.5012, Accuracy: 52.91%
Epoch [46/500], Step [2/2], Loss: 1.4518, Accuracy: 50.78%
Epoch [47/500], Step [2/2], Loss: 1.3860, Accuracy: 58.33%
Epoch [48/500], Step [2/2], Loss: 1.5052, Accuracy: 48.26%
Epoch [49/500], Step [2/2], Loss: 1.4304, Accuracy: 47.09%
Epoch [50/500], Step [2/2], Loss: 1.4462, Accuracy: 50.97%
Epoch [51/500], Step [2/2], Loss: 1.3482, Accuracy: 57.56%
Epoch [52/500], Step [2/2], Loss: 1.4421, Accuracy: 51.94%
Epoch [53/500], Step [2/2], Loss: 1.4344, Accuracy: 50.00%
Epoch [54/500], Step [2/2], Loss: 1.3641, Accuracy: 56.01%
Epoch [55/500], Step [2/2], Loss: 1.3910, Accuracy: 53.10%
Epoch [56/500], Step [2/2], Loss: 1.3702, Accuracy: 52.13%
Epoch [57/500], Step [2/2], Loss: 1.2592, Accuracy: 61.05%
Epoch [58/500], Step [2/2], Loss: 1.4160, Accuracy: 49.22%
Epoch [59/500], Step [2/2], Loss: 1.3207, Accuracy: 56.78%
Epoch [60/500], Step [2/2], Loss: 1.2412, Accuracy: 58.72%
Epoch [61/500], Step [2/2], Loss: 1.2522, Accuracy: 59.11%
Epoch [62/500], Step [2/2], Loss: 1.2712, Accuracy: 55.62%
Epoch [63/500], Step [2/2], Loss: 1.2283, Accuracy: 62.60%
Epoch [64/500], Step [2/2], Loss: 1.4018, Accuracy: 48.26%
Epoch [65/500], Step [2/2], Loss: 1.3962, Accuracy: 50.39%
Epoch [66/500], Step [2/2], Loss: 1.3044, Accuracy: 53.88%
Epoch [67/500], Step [2/2], Loss: 1.1994, Accuracy: 58.14%
Epoch [68/500], Step [2/2], Loss: 1.2737, Accuracy: 56.59%
Epoch [69/500], Step [2/2], Loss: 1.1808, Accuracy: 63.95%
Epoch [70/500], Step [2/2], Loss: 1.1857, Accuracy: 59.30%
Epoch [71/500], Step [2/2], Loss: 1.1621, Accuracy: 62.40%
Epoch [72/500], Step [2/2], Loss: 1.1327, Accuracy: 64.73%
Epoch [73/500], Step [2/2], Loss: 1.2124, Accuracy: 57.75%
Epoch [74/500], Step [2/2], Loss: 1.1848, Accuracy: 56.98%
Epoch [75/500], Step [2/2], Loss: 1.1858, Accuracy: 59.50%
Epoch [76/500], Step [2/2], Loss: 1.1764, Accuracy: 59.69%
Epoch [77/500], Step [2/2], Loss: 1.2241, Accuracy: 56.01%
Epoch [78/500], Step [2/2], Loss: 1.2070, Accuracy: 56.78%
Epoch [79/500], Step [2/2], Loss: 1.1346, Accuracy: 62.60%
Epoch [80/500], Step [2/2], Loss: 1.0881, Accuracy: 62.79%
Epoch [81/500], Step [2/2], Loss: 1.0764, Accuracy: 63.95%
Epoch [82/500], Step [2/2], Loss: 1.0618, Accuracy: 63.57%
Epoch [83/500], Step [2/2], Loss: 1.1234, Accuracy: 59.69%
Epoch [84/500], Step [2/2], Loss: 1.0752, Accuracy: 65.12%
Epoch [85/500], Step [2/2], Loss: 0.9835, Accuracy: 64.92%
Epoch [86/500], Step [2/2], Loss: 1.0583, Accuracy: 61.63%
Epoch [87/500], Step [2/2], Loss: 1.0892, Accuracy: 63.76%
Epoch [88/500], Step [2/2], Loss: 1.0511, Accuracy: 61.24%
Epoch [89/500], Step [2/2], Loss: 1.0785, Accuracy: 60.66%
Epoch [90/500], Step [2/2], Loss: 1.0895, Accuracy: 65.50%
Epoch [91/500], Step [2/2], Loss: 1.0642, Accuracy: 61.24%
Epoch [92/500], Step [2/2], Loss: 1.1050, Accuracy: 57.95%
Epoch [93/500], Step [2/2], Loss: 1.0463, Accuracy: 64.53%
Epoch [94/500], Step [2/2], Loss: 0.9804, Accuracy: 65.89%
Epoch [95/500], Step [2/2], Loss: 1.1098, Accuracy: 61.63%
Epoch [96/500], Step [2/2], Loss: 1.0019, Accuracy: 66.28%
Epoch [97/500], Step [2/2], Loss: 0.9884, Accuracy: 65.12%
Epoch [98/500], Step [2/2], Loss: 1.0109, Accuracy: 65.89%
Epoch [99/500], Step [2/2], Loss: 1.0032, Accuracy: 64.53%
Epoch [100/500], Step [2/2], Loss: 0.9925, Accuracy: 69.19%
Epoch [101/500], Step [2/2], Loss: 0.9800, Accuracy: 68.41%
Epoch [102/500], Step [2/2], Loss: 1.0144, Accuracy: 67.44%
Epoch [103/500], Step [2/2], Loss: 0.9700, Accuracy: 66.47%
Epoch [104/500], Step [2/2], Loss: 1.0198, Accuracy: 65.70%
Epoch [105/500], Step [2/2], Loss: 0.9660, Accuracy: 65.31%
Epoch [106/500], Step [2/2], Loss: 1.0273, Accuracy: 65.12%
Epoch [107/500], Step [2/2], Loss: 0.9545, Accuracy: 67.25%
Epoch [108/500], Step [2/2], Loss: 0.9856, Accuracy: 64.73%
Epoch [109/500], Step [2/2], Loss: 0.9455, Accuracy: 66.67%
Epoch [110/500], Step [2/2], Loss: 0.9822, Accuracy: 65.70%
Epoch [111/500], Step [2/2], Loss: 0.9607, Accuracy: 65.89%
Epoch [112/500], Step [2/2], Loss: 0.9973, Accuracy: 62.60%
Epoch [113/500], Step [2/2], Loss: 0.9855, Accuracy: 64.34%
Epoch [114/500], Step [2/2], Loss: 0.9560, Accuracy: 65.89%
Epoch [115/500], Step [2/2], Loss: 1.0016, Accuracy: 65.70%
Epoch [116/500], Step [2/2], Loss: 0.9764, Accuracy: 66.67%
Epoch [117/500], Step [2/2], Loss: 1.0549, Accuracy: 60.27%
Epoch [118/500], Step [2/2], Loss: 1.0116, Accuracy: 62.98%
Epoch [119/500], Step [2/2], Loss: 0.9762, Accuracy: 63.76%
Epoch [120/500], Step [2/2], Loss: 0.9199, Accuracy: 69.57%
Epoch [121/500], Step [2/2], Loss: 0.9105, Accuracy: 69.57%
Epoch [122/500], Step [2/2], Loss: 0.9481, Accuracy: 66.47%
Epoch [123/500], Step [2/2], Loss: 0.8978, Accuracy: 70.54%
Epoch [124/500], Step [2/2], Loss: 0.9703, Accuracy: 64.53%
Epoch [125/500], Step [2/2], Loss: 0.8865, Accuracy: 69.38%
Epoch [126/500], Step [2/2], Loss: 0.8936, Accuracy: 68.99%
Epoch [127/500], Step [2/2], Loss: 0.8813, Accuracy: 70.74%
Epoch [128/500], Step [2/2], Loss: 0.9221, Accuracy: 66.67%
Epoch [129/500], Step [2/2], Loss: 0.8629, Accuracy: 68.22%
Epoch [130/500], Step [2/2], Loss: 0.8961, Accuracy: 68.41%
Epoch [131/500], Step [2/2], Loss: 0.9542, Accuracy: 66.09%
Epoch [132/500], Step [2/2], Loss: 0.8764, Accuracy: 67.83%
Epoch [133/500], Step [2/2], Loss: 0.8833, Accuracy: 70.54%
Epoch [134/500], Step [2/2], Loss: 0.9093, Accuracy: 68.99%
Epoch [135/500], Step [2/2], Loss: 0.8564, Accuracy: 69.57%
Epoch [136/500], Step [2/2], Loss: 0.9001, Accuracy: 68.22%
Epoch [137/500], Step [2/2], Loss: 0.9289, Accuracy: 67.05%
Epoch [138/500], Step [2/2], Loss: 0.8610, Accuracy: 68.99%
Epoch [139/500], Step [2/2], Loss: 0.8460, Accuracy: 69.38%
Epoch [140/500], Step [2/2], Loss: 0.9108, Accuracy: 66.67%
Epoch [141/500], Step [2/2], Loss: 0.9232, Accuracy: 62.02%
Epoch [142/500], Step [2/2], Loss: 0.9062, Accuracy: 68.41%
Epoch [143/500], Step [2/2], Loss: 0.9208, Accuracy: 65.70%
Epoch [144/500], Step [2/2], Loss: 0.8209, Accuracy: 69.57%
Epoch [145/500], Step [2/2], Loss: 1.0096, Accuracy: 64.15%
Epoch [146/500], Step [2/2], Loss: 0.9415, Accuracy: 63.37%
Epoch [147/500], Step [2/2], Loss: 0.8265, Accuracy: 70.35%
Epoch [148/500], Step [2/2], Loss: 0.8820, Accuracy: 67.83%
Epoch [149/500], Step [2/2], Loss: 0.8374, Accuracy: 69.57%
Epoch [150/500], Step [2/2], Loss: 0.8788, Accuracy: 68.80%
Epoch [151/500], Step [2/2], Loss: 0.8509, Accuracy: 68.22%
Epoch [152/500], Step [2/2], Loss: 0.8474, Accuracy: 68.99%
Epoch [153/500], Step [2/2], Loss: 0.8532, Accuracy: 68.41%
Epoch [154/500], Step [2/2], Loss: 0.9183, Accuracy: 65.70%
Epoch [155/500], Step [2/2], Loss: 0.8343, Accuracy: 68.99%
Epoch [156/500], Step [2/2], Loss: 0.8329, Accuracy: 70.93%
Epoch [157/500], Step [2/2], Loss: 0.8276, Accuracy: 69.57%
Epoch [158/500], Step [2/2], Loss: 0.8784, Accuracy: 67.64%
Epoch [159/500], Step [2/2], Loss: 0.8516, Accuracy: 68.99%
Epoch [160/500], Step [2/2], Loss: 0.8501, Accuracy: 67.64%
Epoch [161/500], Step [2/2], Loss: 0.8175, Accuracy: 70.54%
Epoch [162/500], Step [2/2], Loss: 0.8893, Accuracy: 66.09%
Epoch [163/500], Step [2/2], Loss: 0.8007, Accuracy: 71.12%
Epoch [164/500], Step [2/2], Loss: 0.8024, Accuracy: 71.71%
Epoch [165/500], Step [2/2], Loss: 0.8952, Accuracy: 65.70%
Epoch [166/500], Step [2/2], Loss: 0.8150, Accuracy: 70.16%
Epoch [167/500], Step [2/2], Loss: 0.8346, Accuracy: 71.71%
Epoch [168/500], Step [2/2], Loss: 0.7924, Accuracy: 72.29%
Epoch [169/500], Step [2/2], Loss: 0.8469, Accuracy: 68.80%
Epoch [170/500], Step [2/2], Loss: 0.7887, Accuracy: 71.90%
Epoch [171/500], Step [2/2], Loss: 0.7817, Accuracy: 72.67%
Epoch [172/500], Step [2/2], Loss: 0.8664, Accuracy: 68.22%
Epoch [173/500], Step [2/2], Loss: 0.8933, Accuracy: 66.86%
Epoch [174/500], Step [2/2], Loss: 0.8458, Accuracy: 68.02%
Epoch [175/500], Step [2/2], Loss: 0.7665, Accuracy: 70.74%
Epoch [176/500], Step [2/2], Loss: 0.8106, Accuracy: 71.71%
Epoch [177/500], Step [2/2], Loss: 0.7843, Accuracy: 72.87%
Epoch [178/500], Step [2/2], Loss: 0.8406, Accuracy: 69.57%
Epoch [179/500], Step [2/2], Loss: 0.8736, Accuracy: 68.22%
Epoch [180/500], Step [2/2], Loss: 0.8259, Accuracy: 69.19%
Epoch [181/500], Step [2/2], Loss: 0.8212, Accuracy: 68.41%
Epoch [182/500], Step [2/2], Loss: 0.7836, Accuracy: 71.32%
Epoch [183/500], Step [2/2], Loss: 0.8494, Accuracy: 69.96%
Epoch [184/500], Step [2/2], Loss: 0.7509, Accuracy: 71.90%
Epoch [185/500], Step [2/2], Loss: 0.8929, Accuracy: 64.92%
Epoch [186/500], Step [2/2], Loss: 0.9703, Accuracy: 63.57%
Epoch [187/500], Step [2/2], Loss: 0.9031, Accuracy: 65.31%
Epoch [188/500], Step [2/2], Loss: 0.8962, Accuracy: 69.57%
Epoch [189/500], Step [2/2], Loss: 0.7698, Accuracy: 73.26%
Epoch [190/500], Step [2/2], Loss: 0.7653, Accuracy: 73.45%
Epoch [191/500], Step [2/2], Loss: 0.7678, Accuracy: 73.45%
Epoch [192/500], Step [2/2], Loss: 0.7764, Accuracy: 72.48%
Epoch [193/500], Step [2/2], Loss: 0.8263, Accuracy: 69.77%
Epoch [194/500], Step [2/2], Loss: 0.7865, Accuracy: 70.54%
Epoch [195/500], Step [2/2], Loss: 0.8221, Accuracy: 67.44%
Epoch [196/500], Step [2/2], Loss: 0.8081, Accuracy: 71.71%
Epoch [197/500], Step [2/2], Loss: 0.7749, Accuracy: 70.93%
Epoch [198/500], Step [2/2], Loss: 0.9218, Accuracy: 65.50%
Epoch [199/500], Step [2/2], Loss: 0.8357, Accuracy: 69.19%
Epoch [200/500], Step [2/2], Loss: 0.7215, Accuracy: 72.87%
Epoch [201/500], Step [2/2], Loss: 0.8225, Accuracy: 68.41%
Epoch [202/500], Step [2/2], Loss: 0.7808, Accuracy: 71.51%
Epoch [203/500], Step [2/2], Loss: 0.8032, Accuracy: 71.12%
Epoch [204/500], Step [2/2], Loss: 0.8029, Accuracy: 69.19%
Epoch [205/500], Step [2/2], Loss: 0.7837, Accuracy: 72.09%
Epoch [206/500], Step [2/2], Loss: 0.7925, Accuracy: 69.77%
Epoch [207/500], Step [2/2], Loss: 0.8127, Accuracy: 71.12%
Epoch [208/500], Step [2/2], Loss: 0.7659, Accuracy: 71.32%
Epoch [209/500], Step [2/2], Loss: 0.7715, Accuracy: 74.61%
Epoch [210/500], Step [2/2], Loss: 0.7904, Accuracy: 69.57%
Epoch [211/500], Step [2/2], Loss: 0.8181, Accuracy: 70.16%
Epoch [212/500], Step [2/2], Loss: 0.7732, Accuracy: 72.48%
Epoch [213/500], Step [2/2], Loss: 0.7485, Accuracy: 71.12%
Epoch [214/500], Step [2/2], Loss: 0.8279, Accuracy: 70.74%
Epoch [215/500], Step [2/2], Loss: 0.7789, Accuracy: 72.67%
Epoch [216/500], Step [2/2], Loss: 0.7284, Accuracy: 73.45%
...
Epoch [498/500], Step [2/2], Loss: 0.6289, Accuracy: 76.55%
Epoch [499/500], Step [2/2], Loss: 0.6836, Accuracy: 73.64%
Epoch [500/500], Step [2/2], Loss: 0.5896, Accuracy: 76.94%

The training accuracy usually starts out below \(10\%\), which makes sense, as chance level is \(100/14 \approx 7\%\) for our 14 movement classes. It usually climbs quickly to around \(80\%\). The model would keep improving with more epochs, but this is sufficient for now.

Now we want to see performance on the test data set:

# Test the model
model.eval()
real_labels, predicted_labels = [], []
with torch.no_grad():
  correct = 0
  total = 0
  for motions, labels in test_loader:
    motions, labels = motions.to(device), labels.to(device)
    real_labels += list(labels.cpu())  # CPU copies for sklearn's confusion matrix
    outputs = model(motions)
    _, predicted = torch.max(outputs.data, 1)
    predicted_labels += list(predicted.cpu())
    total += labels.size(0)
    correct += (predicted == labels).sum().item()

  print(f"Test Accuracy of the model on the {total} test moves: {(correct / total)*100:.3f}%")
Test Accuracy of the model on the 172 test moves: 74.419%

Considering that we have a relatively small data set, and a fairly simple model that didn’t really converge, this is decent performance (chance is ~7%). Let’s look at where the errors are:

# plt.style.use('dark_background')  # uncomment if you're using dark mode...
plotConfusionMatrix(real_labels, predicted_labels, label_names)
../../_images/Example_Deep_Learning_Project_49_0.png

The errors vary each time the model is run, but a common error seems to be that head scratching is predicted from some other movements that also involve arms a lot: throw/catch, hand clapping, phone talking, checking watch, hand waving, taking photo. If we train the model longer, these errors tend to go away as well. For some reason, crossed legged sitting is sometimes misclassified for crawling, but this doesn’t always happen.
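To pull the dominant confusions out of a confusion matrix programmatically, one could zero the diagonal and list the remaining counts. A toy sketch, with invented labels and predictions (these are not the notebook's variables):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy ground-truth and predictions for three made-up movement classes
y_true = ['scratch', 'wave', 'clap', 'scratch', 'wave', 'clap', 'wave']
y_pred = ['scratch', 'scratch', 'clap', 'scratch', 'wave', 'scratch', 'wave']

labels = ['scratch', 'wave', 'clap']
cm = confusion_matrix(y_true, y_pred, labels=labels)

# Zero the diagonal, then report the off-diagonal counts (the errors)
errors = cm - np.diag(np.diag(cm))
for i, j in zip(*np.nonzero(errors)):
  print(f"{labels[i]} misread as {labels[j]}: {errors[i, j]}")
```

On real output this makes it easy to rank which movement pairs the model confuses most often, instead of eyeballing the matrix plot.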


Step 8: Modeling completion

Are we done yet? To answer our questions, reach our goals, and evaluate our hypotheses, we need to get test performance from the model, and we might want to investigate the errors it makes. We will first write a function that fits the model on a specified set of joints.

def testJointModel(joints=list(range(24)),
                   num_epochs=500,
                   batch_size=516,
                   learning_rate=0.001):

  # Hyperparameters
  num_classes = 14

  # Create training and test datasets
  movi_train = MoViJointDataset(train  = True, joints = joints)
  movi_test  = MoViJointDataset(train  = False, joints = joints)

  # Data loaders
  train_loader = DataLoader(dataset=movi_train, batch_size=batch_size, shuffle=True)
  test_loader  = DataLoader(dataset=movi_test, batch_size=batch_size,  shuffle=False)

  # create the model object:
  model = Mov1DCNN(njoints=len(joints)).to(device)

  # loss and optimizer:
  criterion = nn.CrossEntropyLoss()
  optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

  # Train the model
  loss_list = []  # collect losses (available for debugging)
  for epoch in range(num_epochs):
    for i, (motions, labels) in enumerate(train_loader):
      motions, labels = motions.to(device), labels.to(device)

      # Run the forward pass
      outputs = model(motions)
      loss = criterion(outputs, labels)
      loss_list.append(loss.item())

      # Backprop and perform Adam optimisation
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()

  # Test the model
  model.eval()
  real_labels, predicted_labels = [], []
  with torch.no_grad():
    correct = 0
    total = 0
    for motions, labels in test_loader:
      motions, labels = motions.to(device), labels.to(device)
      real_labels += list(labels.cpu())  # CPU copies for sklearn's confusion matrix
      outputs = model(motions)
      _, predicted = torch.max(outputs.data, 1)
      predicted_labels += list(predicted.cpu())
      total += labels.size(0)
      correct += (predicted == labels).sum().item()

  performance = (correct / total) * 100

  return {'performance': performance,
          'real_labels': real_labels,
          'predicted_labels': predicted_labels}

Let’s test this on a few select joints:

This takes up to ~10 seconds:

cnn6j = testJointModel(joints=[0, 10, 11, 15, 22, 23])
print(cnn6j['performance'])
plotConfusionMatrix(real_labels = cnn6j['real_labels'], predicted_labels = cnn6j['predicted_labels'], label_names=label_names)
70.34883720930233
../../_images/Example_Deep_Learning_Project_54_1.png

That is some pretty good performance based on only 6 of the 24 joints!

  • Can we answer our question? –> YES, we can classify movement, and we can do so based on a sub-set of joints.

  • Have we reached our goals? –> YES, this pilot study shows that we can decode movement type based on skeletal joint motion data.

  • Can we evaluate our hypotheses? –> YES, we can now test the specific model performances and compare them.

Good news, looks like we’re done with a first iteration of modeling!


Step 9: Model evaluation

We can now see how well our model actually does, by running it to test our hypotheses. To test our hypotheses, we will group the joints into limbs:

limb_joints = {'Left Leg': [1, 4, 7, 10],
               'Right Leg': [2, 5, 8, 11],
               'Left Arm': [13, 16, 18, 20, 22],
               'Right Arm': [14, 17, 19, 21, 23],
               'Torso': [0, 3, 6, 9],
               'Head': [12, 15]}
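A repeated or missing joint index is an easy mistake to make in a grouping like this, so a quick sanity check that the limbs partition all 24 joints can help. A standalone sketch (the dictionary is re-declared here so the snippet runs on its own; the right leg uses joint 11, the right foot):

```python
# Sanity check: the limb grouping should cover each of the 24 joints exactly once
limb_joints = {'Left Leg': [1, 4, 7, 10],
               'Right Leg': [2, 5, 8, 11],
               'Left Arm': [13, 16, 18, 20, 22],
               'Right Arm': [14, 17, 19, 21, 23],
               'Torso': [0, 3, 6, 9],
               'Head': [12, 15]}

all_joints = sorted(j for joints in limb_joints.values() for j in joints)
assert all_joints == list(range(24)), "grouping is not a partition of the 24 joints"
print("limb grouping covers all 24 joints exactly once")
```

A duplicated index (e.g. listing a hip joint in both legs) would fail this assertion immediately, before any model fitting.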

Our second hypothesis was that, since our participants are right-handed, the right arm will give us better classification performance than the left arm. We will fit the model on each individual limb, and then compare performance between the left and right arm.

This should take up to ~1 minute!

limb_fits = {}
for limb in limb_joints.keys():
  print(f"\n*** FITTING: {limb}")

  joints = limb_joints[limb]
  limb_fit = testJointModel(joints=joints)
  limb_fits[limb] = limb_fit
  print(f"limb performance: {limb_fit['performance']:.2f}%")
*** FITTING: Left Leg
limb performance: 74.42%

*** FITTING: Right Leg
limb performance: 66.28%

*** FITTING: Left Arm
limb performance: 58.72%

*** FITTING: Right Arm
limb performance: 40.12%

*** FITTING: Torso
limb performance: 79.65%

*** FITTING: Head
limb performance: 52.91%

Every time we run this, we get something along these lines:

*** FITTING: LeftLeg
limb performance: 65.70%
*** FITTING: RightLeg
limb performance: 50.58%
*** FITTING: LeftArm
limb performance: 37.21%
*** FITTING: RightArm
limb performance: 22.09%
*** FITTING: Torso
limb performance: 73.84%
*** FITTING: Head
limb performance: 39.53%

For a formal test, you’d fit each model a number of times and let it converge by using many more epochs. We don’t really have time for that here, but the pattern is fairly clear already. The head and arms are the worst, the legs are better, and the torso usually wins!
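As a sketch of what such a formal test could look like: collect the test performance from repeated fits of two models and compare them with a nonparametric test. The scores below are made up for illustration (they are not from the fits above):

```python
from scipy.stats import mannwhitneyu

# Hypothetical performance scores (%) from six repeated fits per limb
left_arm_scores  = [58.7, 55.2, 60.1, 57.3, 59.0, 56.8]
right_arm_scores = [40.1, 43.5, 38.9, 41.2, 44.0, 39.7]

# Nonparametric two-sided comparison; no normality assumption needed
stat, p = mannwhitneyu(left_arm_scores, right_arm_scores, alternative='two-sided')
print(f"U = {stat}, p = {p:.4f}")
```

With only six fits per condition the power is limited, so a real comparison would want more repeats (and fully converged models).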

The left arm seems to outperform the right arm in classifying movements. That was not what we expected. Maybe we should repeat this with left-handed participants to see if their right arm works better?

We still want to test our first hypothesis, which we're not so certain about any more given the performance above, since the torso outperformed the other limbs. But that doesn't mean a model with arms and legs only is necessarily worse than a model that also includes the torso and head.

We will test each of these models six times, and take the median performance.

This takes up to ~4 minutes! (About 2 minutes per kind of model.)

limb_sets = {'limbs only':['Left Leg', 'Right Leg', 'Left Arm', 'Right Arm'],
             'limbs+torso+head':['Left Leg', 'Right Leg', 'Left Arm',
                                 'Right Arm', 'Torso', 'Head']}

for limb_set in limb_sets.keys():
  print(f"\n*** FITTING: {limb_set}")

  limbs = limb_sets[limb_set]

  joints = []
  for limb in limbs:
    joints += limb_joints[limb]

  performances = []
  for repeat in range(6):
    limb_set_fit = testJointModel(joints=joints)
    performances.append(limb_set_fit['performance'])
    print(f"performance: {limb_set_fit['performance']:.2f}%")

  print(f"median performance: {(np.median(performances)):.2f}%")
*** FITTING: limbs only
performance: 72.67%
performance: 75.58%
performance: 70.35%
performance: 69.77%
performance: 79.07%
performance: 77.33%
median performance: 74.13%

*** FITTING: limbs+torso+head
performance: 74.42%
performance: 83.72%
performance: 76.74%
performance: 71.51%
performance: 75.58%
performance: 88.37%
median performance: 76.16%

The models are not converging, or perfect, but almost every time we run this cell, the extra information from the torso and head makes the model perform a little better.

It seems that our spine is pretty fundamental for movement!

Maybe we should see how well we can do with a minimal number of joints measured. Can we go as low as 1 joint? For example, since we usually carry a phone in our pocket, the inertial measurement units (IMUs, i.e., accelerometers + gyroscopes) on a phone might be sufficient to get some idea of the movements people are making. We could test individual joints as well, or combinations of 2 or 3 joints.

Of course, in real life people make many more types of movements, so we might need more joints or IMUs for decent classification. It will also be a problem to figure out when one movement type has ended and the next has begun.
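Segmenting a continuous recording is beyond this pilot, but a first step would be to slide a fixed-length window over the stream and classify each window. A minimal sketch with made-up data (the shapes and window sizes here are assumptions, not values from the MoVi dataset):

```python
import numpy as np

# Hypothetical continuous recording: 24 joints x 1000 frames
stream = np.random.randn(24, 1000)

win, hop = 100, 50  # window length and hop size in frames (arbitrary choices)
starts = range(0, stream.shape[1] - win + 1, hop)
windows = [stream[:, s:s + win] for s in starts]

# Each fixed-length window could then be fed to the trained classifier
print(len(windows), windows[0].shape)
```

Overlapping windows give the classifier a chance to vote near movement boundaries; detecting the boundaries themselves would need an extra segmentation step.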


Step 10: publication

Let’s write a simple abstract following the guidelines…

A. What is the phenomena? Here summarize the part of the phenomena which your modeling addresses.

Movement is well characterized by angular joint information.

B. What is the key scientific question?: Clearly articulate the question which your modeling tries to answer.

Here, we ask how many joints are needed to accurately classify movements, and which joints are the most informative for classification.

C. What was our hypothesis?: Explain the key relationships which we relied on to simulate the phenomena.

We hypothesized that limb motion was more informative than torso motion; and we hypothesized that right side limbs carry more information about movement types than left side limbs.

D. How did your modeling work? Give an overview of the model, its main components, and how the modeling works. ‘’Here we … ‘’

To investigate these hypotheses, we constructed a simple 1D convolutional neural network (CNN) and trained it on different subsets of the publicly available MoVi dataset.

E. What did you find? Did the modeling work? Explain the key outcomes of your modeling evaluation.

Contrary to our expectations, we observed that the torso was more informative for classification than the rest of the joints. Furthermore, the left limbs allowed for better classification than the right limbs.

F. What can you conclude? Conclude as much as you can with reference to the hypothesis, within the limits of the modeling.

We conclude that while our model works to classify movements from subsets of joint rotations, the specific subsets of joints that were most informative were counter to our intuition.

G. What are the limitations and future directions? What is left to be learned? Briefly argue the plausibility of the approach and/or what you think is essential that may have been left out.

Since our dataset contained a limited number of movement types, generalization might be limited. Furthermore, our findings might be specific to our particular choice of model. Finally, classifying continuous movement presents an additional challenge, since we used pre-segmented motion data here.

If we put this all in one paragraph, we have our final complete abstract. But first, do not include the letters in your abstract, and second, you might need to paraphrase the answers a little so they fit together.


Abstract

(A) Movement is well characterized by angular joint information. (B) Here, we ask how many joints are needed to accurately classify movements, and which joints are the most informative for classification. (C) We hypothesized that limb motion was more informative than torso motion; and we hypothesized that right side limbs carry more information about movement types than left side limbs. (D) To investigate these hypotheses, we constructed a simple 1D convolutional neural network (CNN) and trained it on different subsets of the publicly available MoVi dataset. (E) Contrary to our expectations, we observed that the torso was more informative for classification than the rest of the joints. Furthermore, the left limbs allowed for better classification than the right limbs. (F) We conclude that while our model works to classify movements from subsets of joint rotations, the specific subsets of joints that were most informative were counter to our intuition. (G) Since our dataset contained a limited number of movement types, generalization might be limited. Furthermore, our findings might be specific to our particular choice of model. Finally, classifying continuous movement presents an additional challenge, since we used pre-segmented motion data here.