Preserving Activations in Recurrent Neural Networks Based on Surprisal

Neurocomputing, doi:10.1016/j.neucom.2018.11.092 - Feb 2019. Open Access
Associated documents: AAW19.pdf [749 KB]   http://dx.doi.org/10.1016/j.neucom.2018.11.092
Learning hierarchical abstractions from sequences is a challenging open problem for Recurrent Neural Networks (RNNs), mainly due to the difficulty of detecting features that span long time distances and occur at different frequencies. In this paper, we address this challenge by introducing surprisal-based activation, a novel method that preserves activations and skips updates depending on encoding-based information content. The preserved activations can be considered temporal shortcuts with perfect memory. We present a preliminary analysis, evaluating surprisal-based activation on language modelling with the Penn Treebank corpus, and find that it can improve performance compared to baseline RNNs and Long Short-Term Memory (LSTM) networks.
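The surprisal-gated update described in the abstract can be sketched roughly as follows. This is a hypothetical illustration only: the class name, the fixed surprisal threshold, and the Elman-style recurrence are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class SurprisalGatedRNN:
    """Minimal Elman-style RNN whose hidden state is updated only when the
    observed symbol is sufficiently surprising; otherwise the previous
    activation is preserved unchanged (a "temporal shortcut")."""

    def __init__(self, vocab, hidden, threshold=1.0):
        self.Wxh = rng.normal(0.0, 0.1, (hidden, vocab))
        self.Whh = rng.normal(0.0, 0.1, (hidden, hidden))
        self.Why = rng.normal(0.0, 0.1, (vocab, hidden))
        self.threshold = threshold  # surprisal gate (assumed hyperparameter)

    def step(self, h, x_id):
        # Predict a next-symbol distribution from the current state, then
        # measure the surprisal -log p(x_t) of the symbol actually observed.
        p = softmax(self.Why @ h)
        surprisal = -np.log(p[x_id] + 1e-12)
        if surprisal < self.threshold:
            # Low information content: skip the update, preserve the activation.
            return h, surprisal
        # High information content: perform the ordinary recurrent update.
        x = np.zeros(self.Wxh.shape[1])
        x[x_id] = 1.0
        h_new = np.tanh(self.Wxh @ x + self.Whh @ h)
        return h_new, surprisal
```

Because preserved steps copy the state exactly, gradients flow through them unattenuated, which is one way to read the abstract's "perfect memory" shortcut.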


@Article{AAW19,
  author       = "Alpay, Tayfun and Abawi, Fares and Wermter, Stefan",
  title        = "Preserving Activations in Recurrent Neural Networks Based on Surprisal",
  journal      = "Neurocomputing",
  month        = "Feb",
  year         = "2019",
  publisher    = "Elsevier",
  doi          = "10.1016/j.neucom.2018.11.092",
  url          = "https://www2.informatik.uni-hamburg.de/wtm/publications/2019/AAW19/AAW19.pdf"
}
