Neural Constituency Parsing

We propose two approaches to constituency parsing: a score-based parser and a transformer-based architecture.

Score Based

We attempt to learn a score function that induces a recursive splitting of a given sentence into its phrase structure.

Our score function depends on the sentence, on spans in the sentence (index tuples), and on grammar rules. A span $s$ is a 4-tuple $(i,j,k,l)$, where $(i,j)$ and $(k,l)$ define the two phrases, and a rule is an encoded phrase-structure grammar rule derived from the Penn Treebank.

Note: We assume that each phrase splits into at most 2 phrases. Thus, we use the Chomsky Normal Form of the phrase-structure trees in the Penn Treebank.

\[score(w,s,r; W_1, W_2) = g(W_1 w[s])^T W_2 r\] where $w$ is a candidate sentence, $s$ is a span of the form above, $w[s]$ is the sentence representation for the given span $s$, $r$ is the encoded rule, and $g$ is an element-wise non-linearity (ReLU).

$w[s]$ and $r$ are vectors of dimensions $d_w$ and $d_r$, thus making $W_1$ a matrix of weights of dimension $d × d_w$ and $W_2$ a matrix of weights of dimension $d × d_r$.

Encodings

The sentence representation $w[(i,j,k,l)]$ is a vector that is the concatenation of the ELMo span representations for $w[i:j]$ and $w[k:l]$. \[w[i:j] = [\overrightarrow{h_j} - \overrightarrow{h_{i-1}} \,;\, \overleftarrow{h_{j+1}} - \overleftarrow{h_i}]\]
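
As a minimal sketch of this computation (assuming the ELMo forward and backward hidden states are already available as NumPy arrays fwd and bwd, with sentinel rows at the sentence boundaries so that $h_{i-1}$ and $h_{j+1}$ are always defined; the array layout and function names are our own):

#+begin_src python
import numpy as np

def span_rep(fwd, bwd, i, j):
    """ELMo span representation w[i:j] from forward/backward hidden states."""
    forward_part = fwd[j] - fwd[i - 1]    # forward h_j minus forward h_{i-1}
    backward_part = bwd[j + 1] - bwd[i]   # backward h_{j+1} minus backward h_i
    return np.concatenate([forward_part, backward_part])

def span_tuple_rep(fwd, bwd, span):
    """w[(i, j, k, l)] = [w[i:j] ; w[k:l]]."""
    i, j, k, l = span
    return np.concatenate([span_rep(fwd, bwd, i, j), span_rep(fwd, bwd, k, l)])
#+end_src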

For all labels (non-terminals) in the dataset, we one-hot encode each label as a vector. For a production rule $r$, we encode the rule as the sum of the encoded vectors for each of the non-terminals in the rule. To ensure we can recover the rule from its encoding, we also encode the position of each non-terminal in the rule.

For example, for a rule $r: A → B C$ we encode the rule as follows: \[encode(r) = a × 1h(A) + b × 1h(B) + c × 1h(C)\] where $1h(⋅)$ is the one-hot encoded vector for each non-terminal and $a$, $b$, $c$ encode the positions.

For a given encoded vector $r$, the corresponding rule takes one of the following forms:

  • $A → t$

    $t$ is a terminal and thus $r = (0 \ldots a \ldots 0)^T$

  • $A → A$

    $r = (0 \ldots a+b \ldots 0)^T$

  • $A → B$

    $r = (0 \ldots a \ldots b \ldots 0)^T$ or $r = (0 \ldots b \ldots a \ldots 0)^T$

  • $A → A A$

    $r = (0 \ldots a+b+c \ldots 0)^T$

  • $A → B B$

    $r = (0 \ldots a \ldots b+c \ldots 0)^T$ or $r = (0 \ldots b+c \ldots a \ldots 0)^T$

  • $A → B C$

    $r = (0 \ldots a \ldots b \ldots c \ldots 0)^T$ and all possible permutations of positions for $a$, $b$ and $c$

We choose the values of $a,b,c$ such that there are no ambiguities in the rule encodings above. The values we choose for $a,b,c$ are $3^0, 3^1$ and $3^9$.
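
A minimal sketch of this encoding (the label-index mapping and helper names are our own; the coefficients are the values of $a,b,c$ stated above):

#+begin_src python
import numpy as np

A_COEF, B_COEF, C_COEF = 3 ** 0, 3 ** 1, 3 ** 9  # the chosen values of a, b, c

def one_hot(label, label_to_idx):
    v = np.zeros(len(label_to_idx))
    v[label_to_idx[label]] = 1.0
    return v

def encode_rule(lhs, rhs, label_to_idx):
    """Encode a CNF rule A -> B C (or A -> B, or A -> t) as
    a * 1h(A) + b * 1h(B) + c * 1h(C); terminal symbols contribute nothing."""
    r = A_COEF * one_hot(lhs, label_to_idx)
    for coef, symbol in zip((B_COEF, C_COEF), rhs):
        if symbol in label_to_idx:  # skip terminals
            r = r + coef * one_hot(symbol, label_to_idx)
    return r
#+end_src

For instance, encode_rule("S", ("NP", "VP"), label_to_idx) places $a$ at the index of S, $b$ at NP and $c$ at VP, which matches the $A → B C$ case above.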

Training

For a given sentence $w$, the parse tree $T$ is conditioned on the sentence as follows: \[P(T \mid w; W_1, W_2) \propto e^{\sum_{(r,s) \in T} score(w,s,r;W_1,W_2)}\]

Thus, we can learn the weights by maximising the log-likelihood of this conditional distribution (dropping the normalisation term) \[L(W_1,W_2) = \sum_{d=1}^{D} \sum_{(r,s) \in T_d} score(w_d,s,r;W_1,W_2)\] where $D$ is the number of samples in the training dataset and $T_d, w_d$ are the ground-truth tree and sentence of the $d$-th sample.

We model the weight matrices $W_1$ and $W_2$ as multilayer perceptrons. We use backpropagation to compute the gradients of the loss and gradient descent to update the perceptron weights.
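
A minimal PyTorch sketch of such a scorer (assuming PyTorch; the layer sizes and module names are our own, and a single hidden layer stands in for each MLP):

#+begin_src python
import torch
import torch.nn as nn

class SpanRuleScorer(nn.Module):
    """score(w, s, r) = g(W_1 w[s])^T W_2 r, with W_1 and W_2 realised as small MLPs."""

    def __init__(self, d_w, d_r, d):
        super().__init__()
        self.span_net = nn.Sequential(nn.Linear(d_w, d), nn.ReLU())  # plays the role of g(W_1 .)
        self.rule_net = nn.Linear(d_r, d)                            # plays the role of W_2

    def forward(self, span_vec, rule_vec):
        # dot product of the transformed span and rule representations
        return torch.dot(self.span_net(span_vec), self.rule_net(rule_vec))
#+end_src

Training then amounts to maximising the summed scores of the gold (span, rule) pairs with gradient ascent, per the objective above.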

Inference

Given a new sentence, we compute the parse tree as follows:

At each step, we compute the best span and rule as follows: \[s^*, r^* = \arg\max_{s,r}\, score(w,s,r;W_1,W_2)\]

To compute this arg max, we use dynamic programming over the span scores to infer the tree.

  • To start with we define $s_{best}(i,i+1,j,j+1)$ for all $i < j - 1$ as \[s_{best}(i,i+1,j,j+1) = \max_r s(w,(i,i+1,j,j+1), r; W_1,W_2)\] The rule to choose is the argument $r$ for which the above is maximised.
  • With our DP we wish to compute the function $s_{best}(0,n-3,n-2,n-1)$ where $n$ is the size of the input sentence.
  • We reduce the above problem into smaller problems by allowing the indices $(i,j,k,l)$ to move right, left, left and right respectively. Thus, there are $2^4 = 16$ possible sub-problems to maximise over. Let the set of $s_{best}$ scores for all these smaller problems be $S_{best}$.
  • The recursion to express this dynamic programming algorithm is: \[s_{best}(i,j,k,l) = \max_r \{\, s(w, (i,j,k,l), r; W_1, W_2),\ \max S_{best} \,\}\]

We build the model bottom-up and use the best scores to split the sentence into a tree.
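
A minimal sketch of this recursion as a memoised function (the move directions and bounds checks are our reading of the description above; score_fn is assumed to return $\max_r s(w,(i,j,k,l),r;W_1,W_2)$ for the current sentence):

#+begin_src python
from functools import lru_cache

def best_parse_score(score_fn, n):
    """Memoised s_best over spans (i, j, k, l), evaluated at (0, n-3, n-2, n-1)."""

    @lru_cache(maxsize=None)
    def s_best(i, j, k, l):
        best = score_fn((i, j, k, l))
        # the 2^4 index moves: i right, j left, k left, l right (identity excluded)
        for di in (0, 1):
            for dj in (0, -1):
                for dk in (0, -1):
                    for dl in (0, 1):
                        if (di, dj, dk, dl) == (0, 0, 0, 0):
                            continue
                        ni, nj, nk, nl = i + di, j + dj, k + dk, l + dl
                        if 0 <= ni <= nj < nk <= nl <= n - 1:
                            best = max(best, s_best(ni, nj, nk, nl))
        return best

    return s_best(0, n - 3, n - 2, n - 1)
#+end_src

Storing the maximising (span, rule) alongside each score lets the chosen splits be read back to reconstruct the tree.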

The code and the results for this model are at this repository.

Tree Transformer

We also have another idea, which we could not implement or train, but wish to present nevertheless.

We extend the architecture of the Tree Transformer proposed by Yau-Shian Wang et al., which introduces an additional “Constituent Attention” module that uses self-attention between adjacent words to induce tree structures on sequential language data. We define an additional label prior on top of the constituent prior defined in the paper.

Constituent Attention

The constituent attention module lets each word attend to the neighboring words to define the “score” or “probability” of forming a constituent with those words. The information missing here is the label information, which we induce using a label attention module.

Label Prior

Consider a query vector $q_l$ for each label $l$. Let $K_l, V_l$ be the label-specific key and value matrices for this module. We compute a label attention as follows: \[a_l = softmax \left( \frac{q_l ⋅ K_l}{d} \right)\] We use this attention vector to compute a label-context vector $c_l$: \[c_l = a_l V_l\] Each head in the multi-head label attention attends to one label, and the resulting context vectors are concatenated and added to the input embedding of the subsequent self-attention module together with the embedded constituent prior.
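
A minimal sketch of one label-specific head (assuming PyTorch; the shapes and parameter names are our own, with the keys and values produced from the word representations):

#+begin_src python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttentionHead(nn.Module):
    """One head of the label attention: a_l = softmax(q_l . K_l / d), c_l = a_l V_l."""

    def __init__(self, d_model):
        super().__init__()
        self.d = d_model
        self.q_l = nn.Parameter(torch.randn(d_model))  # learned query for label l
        self.key = nn.Linear(d_model, d_model)         # produces K_l from the word states
        self.value = nn.Linear(d_model, d_model)       # produces V_l

    def forward(self, x):
        # x: (seq_len, d_model) word representations
        K_l, V_l = self.key(x), self.value(x)
        a_l = F.softmax(K_l @ self.q_l / self.d, dim=0)  # attention of label l over the words
        c_l = a_l @ V_l                                  # label-context vector for label l
        return c_l
#+end_src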

Training and Parsing

We train this model using the same training objective as the Tree Transformer and perform parsing as described in the paper. Additionally, to infer the label, we use the span representation $s_{ij}$ for positions $i,j$ obtained from the paper's unsupervised parsing and compute the contribution of each label vector to the span (dot product). We normalise and average these contributions to get individual label scores for each span and assign the label with the highest score.
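
A rough sketch of this label assignment (our reading of the description; a softmax stands in for the normalisation step and the averaging over heads is omitted):

#+begin_src python
import torch
import torch.nn.functional as F

def assign_label(span_rep, label_vectors):
    """Assign to a span the label whose vector contributes most to the span representation.

    span_rep: (d,) tensor s_ij for the span (i, j);
    label_vectors: (num_labels, d) tensor of per-label vectors.
    """
    scores = label_vectors @ span_rep  # dot-product contribution of each label
    scores = F.softmax(scores, dim=0)  # normalise to obtain label scores
    return int(torch.argmax(scores))
#+end_src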