LSTM.transpose()

What is LSTM.transpose()?

LSTM.transpose() is an experiment with an unfolded version of LSTMs. The hypothesis is that a deep neural network with the same architecture as an LSTM unfolded through time can be trained efficiently with backpropagation, and that its gradients (even those of the bottom layers) will not be affected by the 'vanishing gradient' problem. This is the case even when the weights are not 'tied'.
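To make the hypothesis concrete, the sketch below (a minimal illustration, not the actual lstmJam code; the class name, sizes, and structure are all assumptions) builds an LSTM unrolled through time into a feed-forward network whose per-step gate weights are deliberately left untied, so each time step has its own parameters and the whole stack is trained with plain backpropagation.

import torch
import torch.nn as nn

class UntiedUnrolledLSTM(nn.Module):
    """Illustrative sketch: an LSTM unrolled for a fixed sequence length,
    with a separate (untied) set of gate weights at every time step."""

    def __init__(self, input_size, hidden_size, seq_len):
        super().__init__()
        self.hidden_size = hidden_size
        # One independent linear map per time step -> weights are NOT shared.
        self.steps = nn.ModuleList([
            nn.Linear(input_size + hidden_size, 4 * hidden_size)
            for _ in range(seq_len)
        ])

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        batch = x.size(0)
        h = x.new_zeros(batch, self.hidden_size)
        c = x.new_zeros(batch, self.hidden_size)
        for t, step in enumerate(self.steps):
            gates = step(torch.cat([x[:, t], h], dim=1))
            i, f, g, o = gates.chunk(4, dim=1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            c = f * c + i * torch.tanh(g)   # additive cell update, as in a standard LSTM
            h = o * torch.tanh(c)
        return h  # final hidden state, e.g. to feed a classifier head

The point of the sketch is only structural: apart from the untied per-step weights, the forward pass is the standard LSTM recurrence, so the experiment can compare gradient behaviour between the tied and untied versions under ordinary backpropagation.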

Code and references

The code is public, and we would like more people to collaborate with us to make it better.

Project Details

Date: Nov 5, 2016
Author: John Gamboa
Tags: LSTM, architecture, memory-networks
Website: https://github.com/vaulttech/lstmJam

About

At MindGarage, we believe that creativity and innovation are essential for advancing the field of Artificial Intelligence. That's why we provide an open and unconstrained environment for highly motivated students to explore the possibilities of Deep Learning. We encourage freedom of thought and creativity in tackling challenging problems, and we're always on the lookout for talented individuals to join our team. If you're passionate about AI and want to contribute to groundbreaking research in Deep Learning, we invite you to learn more about our lab and our projects.

Contact

Gottlieb-Daimler-Str. 48 (48-462),
67663 Kaiserslautern
Germany
