Needs:
Least Squares Linear Regressors
Real Inner Products
Needed by:
Norm Weighted Least Squares Linear Regressors

Weighted Least Squares Linear Regressors

Why

What is the best linear regressor if we select it according to a weighted squared loss function?

Definition

Suppose we have a paired dataset of $n$ records with inputs $a^1, \dots, a^n \in \R^d$ and outputs $y_1, \dots, y_n \in \R$. A weighted least squares linear predictor for nonnegative weights $w \in \R^n$, $w \geq 0$, is a linear transformation $f: \R^d \to \R$ (the field is $\R$), given by $f(a) = x^\top a$ for a parameter vector $x \in \R^d$, which minimizes

\[ \frac{1}{n} \sum_{i = 1}^{n} w_i(y_i - x^\top a^i)^2. \]

Some authors refer to this process of selecting a linear predictor as the weighted least-squares problem.

Define $W \in \R^{n \times n}$ so that $W_{ii} = w_i$ and $W_{ij} = 0$ when $i \neq j$. So, in particular, $W$ is a diagonal matrix. Also define $A \in \R^{n \times d}$ so that the $i$th row of $A$ is $(a^i)^\top$, and $y \in \R^n$ so that the $i$th entry of $y$ is $y_i$. We want to find $x$ to minimize

\[ (Ax - y)^\top W (Ax - y) = \left\lVert W^{1/2}(Ax - y) \right\rVert^2, \]

where $W^{1/2}$ is the diagonal matrix with diagonal entries $\sqrt{w_i}$ and $\lVert \cdot \rVert$ is the Euclidean norm.
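Expanding componentwise confirms that this matrix form reproduces the weighted loss above; the constant factor $1/n$ is dropped, which does not change the minimizer:

\[ (Ax - y)^\top W (Ax - y) = \sum_{i = 1}^{n} w_i \left( (Ax)_i - y_i \right)^2 = \sum_{i = 1}^{n} w_i (y_i - x^\top a^i)^2. \]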

Solution

Provided $A^\top W A$ is invertible (for example, when $w > 0$ and $A$ has full column rank), there exists a unique weighted least squares linear predictor, and its parameter vector is

\[ x = (A^\top W A)^{-1} A^\top W y. \]
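One way to obtain this formula: the objective $x \mapsto (Ax - y)^\top W (Ax - y)$ is convex with gradient $2 A^\top W (Ax - y)$, so any minimizer satisfies the normal equations

\[ A^\top W A x = A^\top W y, \]

and invertibility of $A^\top W A$ gives the unique solution above.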

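For concreteness, here is a minimal numerical sketch, not part of the original sheet, that computes the closed-form parameters with NumPy; the data and variable names below are illustrative. Solving the normal equations with `solve` avoids forming the inverse explicitly, and the result is cross-checked against ordinary least squares on the $\sqrt{w_i}$-scaled system.

```python
import numpy as np

# Illustrative data: n = 5 records, d = 2 features (hypothetical values).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))           # inputs, one record per row
y = rng.standard_normal(5)                # outputs
w = np.array([1.0, 2.0, 0.5, 1.0, 3.0])   # nonnegative weights
W = np.diag(w)

# Closed form: solve (A^T W A) x = A^T W y rather than inverting A^T W A.
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Cross-check: the same x minimizes the unweighted least squares problem
# on the scaled system W^{1/2} A x ~ W^{1/2} y.
s = np.sqrt(w)
x_check, *_ = np.linalg.lstsq(s[:, None] * A, s * y, rcond=None)
assert np.allclose(x, x_check)
```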
Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc