What is LSTM.transpose()?

LSTM.transpose() is an experiment with an unfolded version of LSTMs. The hypothesis is that a deep neural network with the same architecture as an LSTM unfolded through time is efficiently trainable with backpropagation, even in its bottom layers, and is not affected by the 'vanishing gradient' problem. This holds even when the weights are not 'tied' across time steps.
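As a minimal sketch of the idea (the function names and shapes below are illustrative, not taken from the project's code): an unfolded LSTM with untied weights is just a deep feedforward network where each "layer" applies the LSTM gate equations with its own, independent parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM layer: standard gate equations, with this layer's own W, b."""
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b          # (4H,) pre-activations
    i = sigmoid(z[:H])                          # input gate
    f = sigmoid(z[H:2*H])                       # forget gate
    o = sigmoid(z[2*H:3*H])                     # output gate
    g = np.tanh(z[3*H:])                        # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def unfolded_lstm(xs, D, H, rng):
    """Forward pass of the unfolded network: one INDEPENDENT weight
    matrix per time step, i.e. the weights are NOT tied."""
    T = len(xs)
    Ws = [rng.standard_normal((4 * H, D + H)) * 0.1 for _ in range(T)]
    bs = [np.zeros(4 * H) for _ in range(T)]
    h, c = np.zeros(H), np.zeros(H)
    for t in range(T):
        h, c = lstm_step(xs[t], h, c, Ws[t], bs[t])
    return h
```

With tied weights, `Ws[t]` would be the same matrix for every `t`, recovering the ordinary recurrent LSTM; the experiment asks whether the untied variant still trains well at depth.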

Code and references

The code is public, and we would like more people to collaborate with us to make it better.

Project Details

Date: Nov 5, 2016

Author: John Gamboa

Categories: project, rnn


Related Works

Cutting the Error by Half: Investigation of Very Deep CNN and Advanced Training Strategies for Document Image Classification

TAC-GAN - Text Conditioned Auxiliary Classifier Generative Adversarial Network

Multilevel Context Representation for Improving Object Recognition

Music Information Retrieval

Transforming Sensor Data to the Image Domain for Deep Learning - an Application to Footstep Detection


We believe that creativity comes with freedom, and that solving challenging problems in AI requires freedom and creative thinking. We provide an unconstrained environment in which highly motivated students can pursue whatever comes to their minds and explore deep learning.

Our Bunker

Rooms 36-(307/309/310),
TU Kaiserslautern,
Paul-Ehrlich-Str. 36,
67663 Kaiserslautern,