Resource Description

This code is a TensorFlow implementation of an attention mechanism for RNN-based text classification. The uploader has verified that it runs as-is, with no special environment configuration required; feel free to download it.

Code Snippet and File Information

import tensorflow as tf


def attention(inputs, attention_size, time_major=False, return_alphas=False):
    """
    Attention mechanism layer which reduces RNN/Bi-RNN outputs with an Attention vector.

    The idea was proposed in the article by Z. Yang et al., "Hierarchical Attention Networks
     for Document Classification", 2016: http://www.aclweb.org/anthology/N16-1174.
    Variable notation is also inherited from the article.

    Args:
        inputs: The Attention inputs.
            Matches outputs of RNN/Bi-RNN layer (not final state):
                In case of RNN, this must be the RNN outputs `Tensor`:
                    If time_major == False (default), this must be a tensor of shape:
                        `[batch_size, max_time, cell.output_size]`.
                    If time_major == True, this must be a tensor of shape:
                        `[max_time, batch_size, cell.output_size]`.
                In case of Bidirectional RNN, this must be a tuple (outputs_fw, outputs_bw) containing the forward and
                the backward RNN outputs `Tensor`.
                    If time_major == False (default),
                        outputs_fw is a `Tensor` shaped:
                        `[batch_size, max_time, cell_fw.output_size]`
                        and outputs_bw is a `Tensor` shaped:
                        `[batch_size, max_time, cell_bw.output_size]`.
                    If time_major == True,
                        outputs_fw is a `Tensor` shaped:
                        `[max_time, batch_size, cell_fw.output_size]`
                        and outputs_bw is a `Tensor` shaped:
                        `[max_time, batch_size, cell_bw.output_size]`.
        attention_size: Linear size of the Attention weights.
        time_major: The shape format of the `inputs` Tensors.
            If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
            If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
            Using `time_major = True` is a bit more efficient because it avoids
            transposes at the beginning and end of the RNN calculation.  However,
            most TensorFlow data is batch-major, so by default this function
            accepts input and emits output in batch-major form.
        return_alphas: Whether to return the attention coefficients variable along with the layer's output.
            Used for visualization purposes.
    Returns:
        The Attention output `Tensor`.
        In case of RNN, this will be a `Tensor` shaped:
            `[batch_size, cell.output_size]`.
        In case of Bidirectional RNN, this will be a `Tensor` shaped:
            `[batch_size, cell_fw.output_size + cell_bw.output_size]`.
    """

    if isinstance(inputs, tuple):
        # In case of Bi-RNN, concatenate the forward and the backward RNN outputs.
        inputs = tf.concat(inputs, 2)

    if time_major:
        # (T, B, D) => (B, T, D)
        inputs = tf.transpose(inputs, [1, 0, 2])
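
The bundled snippet is truncated at this point. Below is a minimal sketch of how the rest of the layer is typically completed, following the Yang et al. formulation that the docstring references; the parameter names (w_omega, b_omega, u_omega) and the initializers are illustrative assumptions, not necessarily what the bundled attention.py uses.

    # D value - hidden size of the (possibly concatenated) RNN outputs
    hidden_size = inputs.shape[2].value

    # Trainable parameters of the attention layer (names follow the paper's omega notation)
    w_omega = tf.Variable(tf.random_normal([hidden_size, attention_size], stddev=0.1))
    b_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))
    u_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))

    # Per-timestep hidden representation: (B, T, D) x (D, A) -> (B, T, A)
    v = tf.tanh(tf.tensordot(inputs, w_omega, axes=1) + b_omega)

    # Score each timestep against the context vector u_omega and normalize: (B, T)
    vu = tf.tensordot(v, u_omega, axes=1, name='vu')
    alphas = tf.nn.softmax(vu, name='alphas')

    # Attention-weighted sum of the RNN outputs: (B, D)
    output = tf.reduce_sum(inputs * tf.expand_dims(alphas, -1), 1)

    return (output, alphas) if return_alphas else output

Given (Bi-)RNN outputs of shape (B, T, D), this returns a (B, D) summary vector, plus the (B, T) attention weights when return_alphas is True, matching the shapes documented above.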

 Attribute        Size       Date    Time   Name
----------- ----------  ---------- -----  ----
      File        1073  2018-02-02 08:14  tf-rnn-attention-master\.gitignore
      File        4102  2018-02-02 08:14  tf-rnn-attention-master\attention.py
      File          67  2018-08-05 23:35  tf-rnn-attention-master\checkpoint
      File        1068  2018-02-02 08:14  tf-rnn-attention-master\LICENSE
      File     4648057  2018-08-03 16:33  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533283448.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 16:40  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533285656.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 16:42  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533285700.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 16:51  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533286315.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 17:45  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533289549.LAPTOP-0FHPGVM0
      File      913672  2018-08-05 15:56  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533455785.LAPTOP-0FHPGVM0
      File     4653017  2018-08-05 23:35  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533480582.LAPTOP-0FHPGVM0
      File     4627897  2018-08-05 23:35  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533480592.LAPTOP-0FHPGVM0
      File      913672  2018-08-08 21:57  tf-rnn-attention-master\logdir\test\events.out.tfevents.1533736622.LAPTOP-0FHPGVM0
      File     4585769  2018-08-03 16:30  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533283447.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 16:40  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533285656.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 16:42  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533285700.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 16:51  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533286314.LAPTOP-0FHPGVM0
      File      913672  2018-08-03 17:45  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533289549.LAPTOP-0FHPGVM0
      File      913672  2018-08-05 15:56  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533455785.LAPTOP-0FHPGVM0
      File     4606985  2018-08-05 23:31  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533480582.LAPTOP-0FHPGVM0
      File     4589193  2018-08-05 23:30  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533480592.LAPTOP-0FHPGVM0
      File     1447238  2018-08-08 21:59  tf-rnn-attention-master\logdir\train\events.out.tfevents.1533736622.LAPTOP-0FHPGVM0
      File    14895620  2018-08-05 23:35  tf-rnn-attention-master\model.data-00000-of-00001
      File        1688  2018-08-05 23:35  tf-rnn-attention-master\model.index
      File      503631  2018-08-05 23:35  tf-rnn-attention-master\model.meta
      File         601  2018-02-02 08:14  tf-rnn-attention-master\README.md
      File        6211  2018-02-02 08:14  tf-rnn-attention-master\train.py
      File        1196  2018-02-02 08:14  tf-rnn-attention-master\utils.py
      File        4495  2018-08-05 15:56  tf-rnn-attention-master\visualization.html
      File        1511  2018-02-02 08:14  tf-rnn-attention-master\visualize.py

............ (11 more file entries omitted)
