Source: texcla/models/sequence_encoders.py#L0
SequenceEncoderBase
SequenceEncoderBase.__init__
__init__(self, dropout_rate=0.5)
Creates a new instance of a sequence encoder.
Args:
- dropout_rate: Dropout rate applied to the final encoded output. (Default value = 0.5)
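Subclasses are expected to implement `build_model(x)`, which receives the embedded 3D sequence tensor and returns the encoded output; the base class is assumed to apply `dropout_rate` on top. A minimal sketch of a custom encoder under that assumption (the `MaxPoolEncoder` name is hypothetical):

```python
from keras.layers import GlobalMaxPooling1D

from texcla.models.sequence_encoders import SequenceEncoderBase


class MaxPoolEncoder(SequenceEncoderBase):
    """Hypothetical encoder: max-pools the embedded sequence over time."""

    def __init__(self, dropout_rate=0.5):
        super(MaxPoolEncoder, self).__init__(dropout_rate)

    def build_model(self, x):
        # x: (batch, timesteps, embedding_dim) -> (batch, embedding_dim)
        return GlobalMaxPooling1D()(x)
```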
YoonKimCNN
YoonKimCNN.__init__
__init__(self, num_filters=64, filter_sizes=[3, 4, 5], dropout_rate=0.5, **conv_kwargs)
Yoon Kim's shallow CNN model: https://arxiv.org/pdf/1408.5882.pdf
Args:
- num_filters: The number of filters to use per filter_size. (Default value = 64)
- filter_sizes: The filter sizes for each convolutional layer. (Default value = [3, 4, 5])
- **conv_kwargs: Additional args for building the Conv1D layer.
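A minimal usage sketch, assuming `YoonKimCNN` is exported from `texcla.models` and that encoder instances are callable on the embedded 3D tensor:

```python
from keras.layers import Dense, Embedding, Input
from keras.models import Model

from texcla.models import YoonKimCNN  # assumed export path

tokens = Input(shape=(100,), dtype='int32')             # padded token ids
x = Embedding(input_dim=20000, output_dim=128)(tokens)  # (batch, 100, 128)
x = YoonKimCNN(num_filters=64, filter_sizes=[3, 4, 5])(x)
preds = Dense(5, activation='softmax')(x)
model = Model(tokens, preds)
```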
AlexCNN
AlexCNN.__init__
__init__(self, num_filters=20, filter_sizes=[3, 8], dropout_rate=[0.5, 0.8], hidden_dims=20, **conv_kwargs)
Alexander Rakhlin's CNN model: https://github.com/alexander-rakhlin/CNN-for-Sentence-Classification-in-Keras/
Args:
- num_filters: The number of filters to use per filter_size. (Default value = 20)
- filter_sizes: The filter sizes for each convolutional layer. (Default value = [3, 8])
- dropout_rate: Two rates, one for a dropout layer after the embedding and one before the final dense layer. (Default value = [0.5, 0.8])
- hidden_dims: The number of hidden units in the dense layer. (Default value = 20)
- **conv_kwargs: Additional args for building the Conv1D layer.
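An instantiation sketch showing the two-element `dropout_rate` (the export path is an assumption):

```python
from texcla.models import AlexCNN  # assumed export path

# First rate drops out after the embedding, second before the final dense layer.
encoder = AlexCNN(num_filters=20, filter_sizes=[3, 8],
                  dropout_rate=[0.5, 0.8], hidden_dims=20)
```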
StackedRNN
StackedRNN.__init__
__init__(self, rnn_class=keras.layers.LSTM, hidden_dims=[50, 50], bidirectional=True, dropout_rate=0.5, **rnn_kwargs)
Creates a stacked RNN.
Args:
- rnn_class: The type of RNN to use. (Default value = LSTM)
- hidden_dims: The number of hidden units per RNN layer, one entry per stacked layer. (Default value = [50, 50])
- bidirectional: Whether to use bidirectional encoding. (Default value = True)
- **rnn_kwargs: Additional args for building the RNN.
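Any Keras recurrent layer class can be passed as `rnn_class`; a sketch with two stacked bidirectional GRU layers (export path assumed):

```python
from keras.layers import GRU

from texcla.models import StackedRNN  # assumed export path

# Two stacked bidirectional GRU layers of 64 hidden units each.
encoder = StackedRNN(rnn_class=GRU, hidden_dims=[64, 64], bidirectional=True)
```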
BasicRNN
BasicRNN.__init__
__init__(self, rnn_class=keras.layers.LSTM, hidden_dims=50, bidirectional=True, dropout_rate=0.5, **rnn_kwargs)
Creates a basic (single-layer) RNN.
Args:
- rnn_class: The type of RNN to use. (Default value = LSTM)
- hidden_dims: The number of hidden units of the RNN. (Default value = 50)
- bidirectional: Whether to use bidirectional encoding. (Default value = True)
- **rnn_kwargs: Additional args for building the RNN.
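A sketch of a unidirectional variant (export path assumed):

```python
from texcla.models import BasicRNN  # assumed export path

# Single LSTM layer, 50 hidden units, no bidirectional wrapper.
encoder = BasicRNN(hidden_dims=50, bidirectional=False)
```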
AttentionRNN
AttentionRNN.__init__
__init__(self, rnn_class=keras.layers.LSTM, encoder_dims=50, bidirectional=True, dropout_rate=0.5, **rnn_kwargs)
Creates an RNN model with attention. The attention mechanism is implemented as described in https://www.cs.cmu.edu/~hovy/papers/16HLT-hierarchical-attention-networks.pdf, but without sentence-level attention.
Args:
- rnn_class: The type of RNN to use. (Default value = LSTM)
- encoder_dims: The number of hidden units of the RNN. (Default value = 50)
- bidirectional: Whether to use bidirectional encoding. (Default value = True)
- **rnn_kwargs: Additional args for building the RNN.
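An instantiation sketch (export path assumed):

```python
from texcla.models import AttentionRNN  # assumed export path

# Bidirectional LSTM whose timestep outputs are pooled by word-level attention.
encoder = AttentionRNN(encoder_dims=50, bidirectional=True)
```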
AveragingEncoder
AveragingEncoder.__init__
__init__(self, dropout_rate=0)
An encoder that averages the sequence inputs over the time dimension.
Args:
- dropout_rate: Dropout rate applied to the averaged output. (Default value = 0)
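This gives a fast bag-of-embeddings baseline; a sketch (export path assumed):

```python
from texcla.models import AveragingEncoder  # assumed export path

# Averages the embedding vectors over time; useful as a cheap baseline encoder.
encoder = AveragingEncoder()
```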