I understand word embeddings and word2vec.

The paper: https://arxiv.org/pdf/1603.01547.pdf

It describes what seems to be a new type of word embedding:
> Our model uses one word embedding function and two encoder functions. The word embedding function e translates words into vector representations. The first encoder function is a document encoder f that encodes *every word from the document* d *in the context of the whole document*. We call this the **contextual embedding**.
This looks like a new way of encoding words. How would I implement it? Thanks.
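To make the question concrete, here is a minimal sketch of what I *think* the contextual embedding means: `e` is an ordinary embedding lookup and `f` is a bidirectional recurrent encoder (the paper uses bidirectional GRU networks) whose hidden state at position `i` becomes the contextual embedding of word `i`. The class name, layer sizes, and PyTorch choice below are my own assumptions, not taken from the paper. Please correct me if this is off.

```python
# Sketch only: e = word embedding lookup, f = document encoder over the whole document.
# Sizes and framework (PyTorch) are my assumptions, not values from the paper.
import torch
import torch.nn as nn


class ContextualEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
        super().__init__()
        # e: maps word ids to vectors
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # f: bidirectional GRU, so each position sees the whole document (left and right context)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, doc_ids):
        # doc_ids: (batch, doc_len) tensor of word indices
        embedded = self.embed(doc_ids)        # (batch, doc_len, embed_dim)
        contextual, _ = self.gru(embedded)    # (batch, doc_len, 2 * hidden_dim)
        # contextual[:, i, :] is the contextual embedding of word i in its document
        return contextual


if __name__ == "__main__":
    encoder = ContextualEncoder(vocab_size=10000)
    doc = torch.randint(0, 10000, (2, 50))    # 2 toy documents of 50 tokens each
    out = encoder(doc)
    print(out.shape)                          # torch.Size([2, 50, 256])
```

Is this roughly what the authors mean, or does the contextual embedding involve something more than running the embedded document through a bidirectional encoder?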