Weekly exercises

Here you will find the weekly exercises for the coming week, as well as an overview of exercises from past weeks. The week number refers to the week in which the exercises will be discussed in the exercise class.

To learn the material well, it is important to spend time and make a real effort to solve the exercises, preferably before the exercise class.

Some of the exercises are from the book James et al. (2013): An Introduction to Statistical Learning, which will be referred to as ISLR.

Note that the link to the book's website given in the book no longer works; the new link is https://hastie.su.domains/ElemStatLearn/.

A link to the extra exercises is here.

Exercises the coming week

Week 17

Exam 2020: Problems d) and e).

Exercises for past weeks

Week 16

Extra exercises 12 and 13. You find the zip_nn.R file, with examples of the use of nnet() and mlp(), on the 2022 page: /studier/emner/matnat/math/STK2100/v22/r-scripts/.
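As a reminder of the basic nnet() interface (a minimal sketch, not the course's zip_nn.R; the iris data and the argument values here are illustrative choices only):

```r
library(nnet)

# Fit a single-hidden-layer network with 5 hidden units and weight decay.
set.seed(1)
fit <- nnet(Species ~ ., data = iris, size = 5, decay = 0.01,
            maxit = 200, trace = FALSE)

# Predicted classes on the training data.
pred <- predict(fit, iris, type = "class")
mean(pred == iris$Species)  # training accuracy
```

For the exercises you should of course use the zip data and the settings discussed in the lecture scripts.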

Exam 2021: Problem 3

  • Solutions: see solutions from the previous week.


Week 15

Exam 2021: Problems 1, 2

Modified version of extra exercise 10: The purpose of this exercise is to reproduce Figures 8.6 and 8.8 of ISLR. The data set to be used in the analysis is the Heart data, stored in the file Heart.csv in the data folder of the course. Here are the modified exercises:

a) Do as written in the exercise.

b) Reproduce Figure 8.6 of ISLR, but use r-code-week11.R from the lecture on March 19 as inspiration.

c) Do as written in the exercise, but to get the black curve, use mtry=p (i.e. consider all covariates at each split).

d) Do as written in the exercise, and additionally, you may also use r-code-week14.R from the lecture on April 8 as inspiration.

e) Add additional curves to the plot by also fitting gradient boosted trees, with possibly different tree sizes, using r-code-week13.R from the lecture on March 26 as inspiration. Here, you only need to compute the test error.
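A rough sketch of the model fits needed in c) and e). The names here are hypothetical: `heart` stands for the cleaned Heart data with a binary response `AHD` coded 0/1, and `train` for the training indices; the tuning-parameter values are illustrative, not the ones from the lecture scripts:

```r
library(randomForest)
library(gbm)

p <- ncol(heart) - 1  # number of covariates

# c) Bagging: a random forest with mtry = p, so that all covariates
#    are candidates at every split.
bag_fit <- randomForest(factor(AHD) ~ ., data = heart, subset = train,
                        mtry = p, ntree = 500)

# e) Gradient boosted trees; vary interaction.depth for different tree sizes.
gbm_fit <- gbm(AHD ~ ., data = heart[train, ], distribution = "bernoulli",
               n.trees = 1000, interaction.depth = 2, shrinkage = 0.01)

# Test error for the boosted model.
prob <- predict(gbm_fit, heart[-train, ], n.trees = 1000, type = "response")
mean((prob > 0.5) != heart$AHD[-train])
```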


Week 13

Exercises from the book: 9.6 (in b, only fit a tree). The ozone data are available here, with some info here.

Exercises from ISLR: Exercise 8.4, 8.8a-c,e

Exam 2019: Exercise 1e (this was given by mistake last week, before you had learned about decision trees; sorry about that!)

  • Solution: can be found under last week.


Week 12

Exam STK2100 2019: Exercises 1a,d,e and 3

Exam STK2100 2022: Exercise 1 (in d you do not need to consider the three last methods in the table)


Week 11

Exercises from the book: 5.4 and 5.7

Exercises from ISLR: Exercise 7.1, 7.9

  • Solutions: 7.1. Note that there are some similarities between ISLR 7.1 and ESL Ex. 5.1 from week 11. For 7.9, see the R code examples in ISLR Ch. 7.8 Lab: Non-linear Modeling.

Extra exercise 7


Week 10

Exercises from the book: 4.2 and 5.1

  • Solutions: 4.2 and 5.1, or see the solution manual (but note that in 4.2 b)-c), \(\mu_1, \mu_2\) should be \(\hat{\mu}_1, \hat{\mu}_2\)).

Exercises from ISLR: 4.9 and 4.14 (a-f)

  • Solutions: 4.9 and see Chapter 4.7 Lab: Classification Methods in the ISLR book for R-code-examples similar to 4.14.

Exam STK2100 2018: Problem 2

Extra exercise 6


Week 9

Exercises from the book: 4.1

  • Solution: Berit's suggested solution (similar to the one in the solution manual, but with an argument for why the solution is the eigenvector corresponding to the largest eigenvalue of \(\mathbf{W}^{-1} \mathbf{B}\))

Exercise from ISLR: 4.13 (without KNN)

Extra exercise: Modify the example with principal component (PC) regression in the R script r-code-week7.R, so that the numbers of principal components are selected through cross-validation instead of through separate training and test sets. Comment on the results.
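One way to do the cross-validation part with the pls package (a sketch only; `dat` with response `y` is a placeholder for the data and formula used in r-code-week7.R):

```r
library(pls)

set.seed(1)
# Cross-validate over the number of principal components.
fit <- pcr(y ~ ., data = dat, scale = TRUE, validation = "CV")

# Plot the CV estimate of the prediction error against
# the number of components, and inspect the RMSEP values.
validationplot(fit, val.type = "MSEP")
summary(fit)
```

Compare the number of components chosen by cross-validation with the one chosen via the separate training and test sets in the original script.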


Week 8

Exercises from the book: 3.2 and 3.29

Exercises from ISLR: 3.9 a)-c) and e)-f)

  • Solutions: See here.


Week 7

Exercises from ISLR: 3.3, 3.4, 3.6, and 3.7

  • Solutions: See here.

Extra exercises 4 and 5 (you will need this: extra4.r)


Week 6

Exercises from ISLR: 3.8 and 3.5

Extra exercises (see link above): 1, 2, and 3


Week 5

Exercises from the book: 2.7

  • Solutions: See the slides from the exercise session here. Alternatively, see pages 8-9 here, but note a few typos. In (a) for linear regression: \(\hat\beta\) should be \((X^TX)^{-1}X^Ty\) and not \(X(X^TX)^{-1}X^Ty\). For k-NN, a \(y_i\) is missing in the expression. In (b) you should add and subtract \( \mathrm{E}_{\mathcal{Y}|\mathcal{X}}[\hat{f}(x_0)] \) instead of \( \mathrm{E}_{\mathcal{Y}|\mathcal{X}}[f(x_0)] \), and similarly with the unconditional expectation in exercise (c).

Exercises from ISLR: 2.1, 2.2, and 2.8 (see the webpage for the book for downloading data; the easiest alternative is to install the ISLR library (through the command install.packages("ISLR")), make the library available (through the library("ISLR")), and then make the data available through data(College)).
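In R, the steps described above are:

```r
install.packages("ISLR")  # only needed once
library("ISLR")           # make the library available
data(College)             # make the College data available
head(College)             # quick look at the data
```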


Published Jan. 21, 2026 12:56 - Last modified Apr. 20, 2026 13:04