
Modeling Steps 7 - 9#

By Neuromatch Academy

Content creators: Marius ‘t Hart, Megan Peters, Paul Schrater, Jean Laurens, Gunnar Blohm

Production editors: Spiros Chavlis


Step 7: Implementing the model#

Video 8: Implementing the modeling#

This is the step where you finally start writing code! Separately implement each box, icon, or flow relationship identified in Step 6, and test each of those model components separately (this is called a unit test). Unit testing ensures that each model component works as expected/planned.
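As a minimal sketch of what a unit test for a model component could look like, consider the snippet below. The leaky-integrator function is a hypothetical stand-in for one of your model's boxes; the point is that each testable property of the component gets its own assertion.

```python
def leaky_integrator_step(x, inp, dt=0.01, tau=0.1):
    """One Euler step of a leaky integrator: dx/dt = (-x + inp) / tau."""
    return x + dt * (-x + inp) / tau

# Unit test 1: with zero input, the state should decay toward zero.
x = 1.0
for _ in range(10_000):
    x = leaky_integrator_step(x, inp=0.0)
assert abs(x) < 1e-3, "state should decay to zero without input"

# Unit test 2: with constant input, the state should converge to that input.
x = 0.0
for _ in range(10_000):
    x = leaky_integrator_step(x, inp=0.5)
assert abs(x - 0.5) < 1e-3, "state should converge to the input value"

print("All unit tests passed")
```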

Guiding principles:

  • Start with the easiest possible implementation

    • Test functionality of model after each step before adding new model components (unit tests)

    • Simple models can sometimes accomplish a surprising amount…

  • Add / remove different model elements

    • Gain insight into working principles

    • What’s crucial, what isn’t?

    • Every component of the model must be crucial!

  • Make use of tools to evaluate model behavior

    • E.g., graphical analysis, changing parameter sets, stability / equilibrium analyses, derive general solutions, asymptotes, periodic behaviour, etc.
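As a small illustration of that last point, here is a minimal sketch of a parameter sweep with an equilibrium check, again using a hypothetical leaky integrator as the example component. For your own model you would sweep its actual parameters and use whatever evaluation tools are standard in your field.

```python
import numpy as np

def simulate(tau, dt=0.01, n_steps=2000, inp=1.0):
    """Simulate a (hypothetical) leaky integrator for a given time constant."""
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = x[t - 1] + dt * (-x[t - 1] + inp) / tau
    return x

# Parameter sweep: does each setting reach the predicted equilibrium x* = inp?
for tau in [0.02, 0.1, 0.5]:
    x = simulate(tau)
    converged = abs(x[-1] - 1.0) < 1e-2
    print(f"tau={tau:4.2f}  final state={x[-1]:.3f}  converged={converged}")
```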

Goal: Understand how each model component works in isolation and what the resulting overall model behavior is.

Note: if you’re working with data, this step might also involve significant data wrangling to get your dataset into a format usable by your model…
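As a minimal (and entirely hypothetical) example of such wrangling, suppose your raw data arrive in long, trial-by-trial format but your model-fitting code expects one row per subject:

```python
import pandas as pd

# Hypothetical trial-level data, as it might come out of an experiment
raw = pd.DataFrame({
    "subject":   [1, 1, 2, 2],
    "condition": ["A", "B", "A", "B"],
    "response":  [0.8, 0.6, 0.9, 0.5],
})

# Reshape to one row per subject, one column per condition:
# the wide format a model-fitting routine might expect
model_input = raw.pivot(index="subject", columns="condition", values="response")
print(model_input)
```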

Make sure to avoid the pitfalls!

Click here for a recap on pitfalls
  1. Building the whole model at once without testing components

  • you will make mistakes. Debug model components as you go!

  • debugging a complex model is close to impossible. Is it not working because individual components are not working? Or do components not “play nice” together?

  2. Not testing if individual components are important

  • It’s easy to add useless components to a model. They will be distracting for you and for readers

  3. Not testing model functionality step by step as you build up the model

  • You’d be surprised by what basic components can often already achieve…

    • e.g., our intuition is really bad when it comes to dynamical systems

  4. Not using standard model testing tools

  • each field has developed specific mathematical tools to test model behaviors. You’ll be expected to show such evaluations. Make use of them early on!


Step 8: Completing the model#

Video 9: Completing the modeling#

Determining when you’re done modeling is a hard question. Referring back to your original goals will be crucial. This is also where a precise question and specific hypotheses expressed in mathematical relationships come in handy.

Note: you can always keep improving your model, but at some point you need to decide that it is finished. Once you have a model that displays the properties of a system you are interested in, it should be possible to say something about your hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question.

Guiding principles:

  • Determine a criterion

  • Refer to steps 1 (goals) and 4 (hypotheses)

    • Does the model answer the original question sufficiently?

    • Does the model satisfy your own evaluation criteria?

    • Does it speak to the hypotheses?

  • Can the model produce the parametric relationships hypothesized in step 4?

Make sure the model can speak to the hypotheses, and eliminate all parameters that do not.
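One way to keep this decision honest is to code the completion criterion explicitly. A minimal sketch, assuming a purely hypothetical RMSE threshold chosen back in Step 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data and model predictions
observed = rng.normal(loc=1.0, scale=0.1, size=50)
predicted = np.full(50, 1.0)

# The criterion itself was (hypothetically) fixed in Step 1, before modeling
CRITERION = 0.15  # maximum acceptable RMSE

rmse = np.sqrt(np.mean((observed - predicted) ** 2))
if rmse <= CRITERION:
    print(f"RMSE = {rmse:.3f} <= {CRITERION}: the model is done.")
else:
    print(f"RMSE = {rmse:.3f} > {CRITERION}: back to the drawing board!")
```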

Goal: Determine if you can answer your original research question and related hypotheses to your satisfaction. If the original goal has not been met you need to go back to the drawing board!

Make sure to avoid the pitfalls!

Click here for a recap on pitfalls
  1. Forgetting to specify or use a criterion for model completion (in Step 1!)

  • This is crucial for you not to get lost in an endless loop of model improvements

  2. Thinking the model can answer your question / hypotheses without checking

  • always check if all questions and hypotheses can be answered / tested

  • you will fail on your own benchmarks if you neglect this

  3. Thinking you should further improve the model

  • This is only warranted if your model cannot answer your hypotheses / questions and/or meet your goals

  • remember: you can always improve a model, but you want to focus on the question / hypotheses / goals at hand!


Step 9: Testing and evaluating the model#

Video 10: Evaluating the modeling#

Every model needs to be evaluated quantitatively. There are many ways to achieve that, and not every model should be evaluated in the same way. Ultimately, model testing depends on what your goals are and what you want to get out of the model, e.g., a qualitative vs. quantitative fit to data.

Guiding principles:

  • By definition a model is always wrong!

    • Determine upfront what is “right enough” for you

  • Ensure the model interfaces explicitly with current or future data

    • the model should answer the questions/hypotheses/goals with a sufficient amount of detail

  • Quantitative evaluation methods

    • Statistics: how well does the model fit data?

    • Predictability: does the model make testable predictions?

    • Breadth: how general is the model?

  • Comparison against other models (BIC, AIC, etc.; see the sketch after this list)

    • This is often not easy to do in a fair way… Be nice and respectful to other models.

  • Does the model explain previous data? (this is called the subsumption principle in physics!)

  • A good model should provide insight that could not have been gained or would have been hard to uncover without the model

  • Remember, a model is a working hypothesis; a good model should be falsifiable!
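As a sketch of what such a quantitative comparison could look like, the snippet below fits two hypothetical candidate models (polynomials of different degree) to synthetic data and compares them with AIC and BIC under a Gaussian-noise assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.linspace(0, 1, n)
y = 2.0 * x + rng.normal(scale=0.2, size=n)  # hypothetical synthetic data

for degree in [1, 5]:  # two candidate models: a line and a quintic
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    k = degree + 1  # number of free parameters
    rss = np.sum((y - y_hat) ** 2)
    # Gaussian-noise forms of the information criteria (lower is better)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    print(f"degree={degree}  RSS={rss:.2f}  AIC={aic:.1f}  BIC={bic:.1f}")
```

Since these data are generated by a line, the extra parameters of the degree-5 model buy little additional fit, so the simpler model will typically win on BIC; with your own models you would compare the log-likelihoods of the actual fits.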

Goal: You want to demonstrate that your model works well. You also want to make sure you can interpret the model’s meaning and findings: what did the model allow you to learn that was not apparent from the data alone?

Make sure to avoid the pitfalls!

Click here for a recap on pitfalls
  1. Thinking your model is bad

  • does it answer the question / hypotheses and meet your goals? Does it provide the level of explanation and insights you set out to gain? Then it’s probably good enough!

  2. Not providing any quantitative evaluation

  • Just do it, it’s something that’s expected

  3. Not thinking deeply about your model and what you can learn from it

  • this is likely the most important pitfall. You want to learn as much as you can from a model, especially about aspects that you cannot gain from the data alone

  • model interpretation can open new avenues for research and experimentation. That’s part of why we want to model in the first place! A model is a hypothesis!

Ethics#

Time to rethink the ethical implications of your model!

  • did anything change since Step 1?

  • did you learn about new ethical issues?

  • does the modeling outcome have new unanticipated ethical consequences?

  • did you make sure you evaluated your model in an unbiased way?