[BioNLP] New paper on trees versus surface sequences, Jurafsky et al.

Bob Futrelle bob.futrelle at gmail.com
Mon Mar 9 08:46:19 PDT 2015


When Are Tree Structures Necessary for Deep Learning of Representations?
Jiwei Li <http://arxiv.org/find/cs/1/au:+Li_J/0/1/0/all/0/1>, Dan Jurafsky
<http://arxiv.org/find/cs/1/au:+Jurafsky_D/0/1/0/all/0/1>, Eduard Hovy
<http://arxiv.org/find/cs/1/au:+Hovy_E/0/1/0/all/0/1>
(Submitted on 28 Feb 2015 (v1 <http://arxiv.org/abs/1503.00185v1>), last
revised 6 Mar 2015 (this version, v2))

Recursive neural models, which use syntactic parse trees to recursively
generate representations bottom-up from parse children, are a popular new
architecture, promising to capture structural properties like the scope of
negation or long-distance semantic dependencies. But understanding exactly
which tasks this parse-based method is appropriate for remains an open
question. In this paper we benchmark recursive neural models against
sequential recurrent neural models, which are structured solely on word
sequences. We investigate 5 tasks: sentiment classification on (1)
sentences and (2) syntactic phrases; (3) question answering; (4) discourse
parsing; (5) semantic relations (e.g., component-whole between nouns). We
find that recurrent models have equal or superior performance to recursive
models on all tasks except one: semantic relations between nominals. Our
analysis suggests that tasks relying on the scope of negation (like
sentiment) are well-handled by sequential models. Recursive models help
only with tasks that require representing long-distance relations between
words. Our results offer insights on the design of neural architectures for
representation learning.

http://arxiv.org/abs/1503.00185
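
To make the contrast concrete, here is a minimal sketch of the two composition orders the abstract describes: folding a sentence left to right over the word sequence versus folding bottom-up over a parse tree. This is purely illustrative and not from the paper; the shared composition function, dimensions, and names are assumptions, and the paper's actual models (recurrent and recursive networks such as LSTM variants) are more elaborate.

```python
# Illustrative sketch only: sequential vs. tree-structured composition.
# Not the paper's implementation; all names and weights are made up.
import numpy as np

DIM = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1  # shared composition weights

def compose(left, right):
    """Combine two child vectors into one parent vector."""
    return np.tanh(W @ np.concatenate([left, right]))

def sequential_encode(word_vecs):
    """Recurrent-style: fold the sentence left to right over the word sequence."""
    state = np.zeros(DIM)
    for v in word_vecs:
        state = compose(state, v)
    return state

def recursive_encode(tree, word_vecs):
    """Recursive-style: fold bottom-up over a binary parse tree.
    A tree node is either a word index (leaf) or a (left, right) pair."""
    if isinstance(tree, int):
        return word_vecs[tree]
    left, right = tree
    return compose(recursive_encode(left, word_vecs),
                   recursive_encode(right, word_vecs))

# Toy example: "the movie was not good"
words = [rng.standard_normal(DIM) for _ in range(5)]
parse = ((0, 1), (2, (3, 4)))  # ((the movie) (was (not good)))
print(sequential_encode(words))
print(recursive_encode(parse, words))
```

The paper's question is, in effect, when the second encoding (which needs a parser and respects constituent structure) buys anything over the first (which only needs the word order).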

 - Bob Futrelle

