\(\DeclarePairedDelimiterX{\Set}[2]{\{}{\}}{#1 \nonscript\;\delimsize\vert\nonscript\; #2}\) \( \DeclarePairedDelimiter{\set}{\{}{\}}\) \( \DeclarePairedDelimiter{\parens}{\left(}{\right)}\) \(\DeclarePairedDelimiterX{\innerproduct}[1]{\langle}{\rangle}{#1}\) \(\newcommand{\ip}[1]{\innerproduct{#1}}\) \(\newcommand{\bmat}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\barray}[1]{\left[\hspace{2.0pt}\begin{matrix}#1\end{matrix}\hspace{2.0pt}\right]}\) \(\newcommand{\mat}[1]{\begin{matrix}#1\end{matrix}}\) \(\newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}}\) \(\newcommand{\mathword}[1]{\mathop{\textup{#1}}}\)
Needs:
Neural Networks
Similarity Functions
Needed by:
Variational Autoencoders

Autoencoders

Why

1

Definition

A neural network $\nu $ commutes with a neural network $\mu $ if their associated predictors commute as functions.

An autoencoder (or feedforward autoencoder) is a pair of neural networks $((\phi _1, \dots , \phi _k), (\psi _1, \dots , \psi _\ell ))$. If the networks commute and $\dom \phi _1 = \dom \psi _\ell $, we call the autoencoder regular. We call the predictor of the first network the encoder and the predictor of the second network the decoder. We call the image of an input under the encoder an embedding (or feature vector, representation, code).
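As a concrete (and entirely hypothetical) illustration of this definition, here is a minimal NumPy sketch of an encoder/decoder pair with one layer each; the dimensions, weights, and activation are arbitrary choices, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer networks: encoder f : R^4 -> R^2, decoder g : R^2 -> R^4.
W_enc = rng.normal(size=(2, 4))
W_dec = rng.normal(size=(4, 2))

def f(x):
    """Encoder predictor: maps an input to its embedding (feature vector)."""
    return np.tanh(W_enc @ x)

def g(z):
    """Decoder predictor: maps an embedding back to input space."""
    return W_dec @ z

x = rng.normal(size=4)
embedding = f(x)               # the embedding of x
reconstruction = g(embedding)  # g(f(x)), generally not equal to x
print(embedding.shape, reconstruction.shape)  # (2,) (4,)
```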

Compressive autoencoders

Let $(\phi , \psi )$ be regular and let $f: \R ^d \to \R ^k$ be the encoder and $g: \R ^k \to \R ^d$ be the decoder. If $k < d$, we call the autoencoder compressive. Otherwise, we call the autoencoder noncompressive. An autoencoder is perfect if $g \circ f$ is the identity function. A compressive autoencoder cannot be perfect: when $k < d$, no continuous map $f: \R ^d \to \R ^k$ is injective, so $g \circ f$ cannot be the identity.
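The non-injectivity of a compressive encoder can be witnessed concretely in the linear case: any matrix mapping $\R ^d \to \R ^k$ with $k < d$ has a nontrivial null space. The sketch below (with an arbitrary random matrix standing in for the encoder) exhibits two distinct inputs with the same embedding.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 4, 2
A = rng.normal(size=(k, d))  # hypothetical linear encoder f(x) = A x, with k < d

# Any nonzero v with A v = 0 witnesses non-injectivity: f(x) = f(x + v).
# The trailing right-singular vectors of A span its null space.
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]  # A v = 0 up to rounding, since rank(A) <= k < d

x = rng.normal(size=d)
assert np.allclose(A @ x, A @ (x + v))  # same embedding, distinct inputs
# Hence no decoder g can satisfy g(f(x)) = x for both x and x + v.
```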

Let us relax the notion of a perfect autoencoder by introducing a similarity function $\ell : \R ^d \times \R ^d \to \R $ (see Similarity Functions). An autoencoder is optimal with respect to $\ell $ if it minimizes $\int_{\R ^d} \ell (g(f(z)), z) \, dz$. This integral may diverge. Even if it converges for some autoencoders, an optimal autoencoder may fail to exist, or fail to be unique.
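To make this objective computable, one often replaces the integral over $\R ^d$ by an average over a finite sample. The sketch below is an illustrative setup, not part of the sheet's definitions: it takes the similarity function to be squared error, a linear parametric family of autoencoders, and data drawn near a $k$-dimensional subspace, then minimizes the sampled objective by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n = 4, 2, 256

# Sample Z near a k-dimensional subspace of R^d; the average over Z stands in
# for the (possibly divergent) integral over R^d.
basis, _ = np.linalg.qr(rng.normal(size=(d, k)))  # orthonormal d x k basis
Z = rng.normal(size=(n, k)) @ basis.T + 0.01 * rng.normal(size=(n, d))

# A linear family of autoencoders: encoder f(z) = A z, decoder g(w) = B w.
A = 0.1 * rng.normal(size=(k, d))
B = 0.1 * rng.normal(size=(d, k))

def risk(A, B):
    """Average squared-error similarity between g(f(z)) and z over the sample."""
    R = Z @ A.T @ B.T - Z
    return np.mean(np.sum(R ** 2, axis=1))

lr = 0.1
for _ in range(2000):
    R = Z @ A.T @ B.T - Z              # residuals g(f(z_i)) - z_i, shape n x d
    grad_B = 2 * R.T @ (Z @ A.T) / n   # gradient of the risk in B
    grad_A = 2 * B.T @ R.T @ Z / n     # gradient of the risk in A
    A -= lr * grad_A
    B -= lr * grad_B

print(risk(A, B))  # small: the compressive pair reconstructs the sample well
```

In this linear, squared-error case the minimizer is not unique: for any invertible $k \times k$ matrix $M$, the pair $(MA, BM^{-1})$ achieves the same risk, illustrating the non-uniqueness remarked above.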

If we parameterize a family of autoencoders $\set{x_{\theta }}_{\theta \in \Theta }$ by a compact set $\Theta $, ...2

It is natural to be interested in compressive autoencoders: a good one summarizes each $d$-dimensional input by a $k$-dimensional embedding from which the input can be approximately reconstructed.


  1. Future editions will include. Future editions may also change the name of this sheet. ↩︎
  2. Future editions will continue. ↩︎
Copyright © 2023 The Bourbaki Authors — All rights reserved — Version 13a6779cc