<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<link href="http://arxiv.org/api/query?search_query%3Dall%3ARada%20AND%20all%3AMihalcea%26id_list%3D%26start%3D0%26max_results%3D50" rel="self" type="application/atom+xml"/>
<title type="html">ArXiv Query: search_query=all:Rada AND all:Mihalcea&amp;id_list=&amp;start=0&amp;max_results=50</title>
<id>http://arxiv.org/api/HsS3KOcJUhrSeCaEuD3GyTYHqwU</id>
<updated>2019-04-13T00:00:00-04:00</updated>
<opensearch:totalResults xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">13</opensearch:totalResults>
<opensearch:startIndex xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">0</opensearch:startIndex>
<opensearch:itemsPerPage xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">50</opensearch:itemsPerPage>
<entry>
<id>http://arxiv.org/abs/1311.2978v1</id>
<updated>2013-11-12T23:11:40Z</updated>
<published>2013-11-12T23:11:40Z</published>
<title>Authorship Attribution Using Word Network Features</title>
<summary> In this paper, we explore a set of novel features for authorship attribution
of documents. These features are derived from a word network representation of
natural language text. As has been noted in previous studies, natural language
tends to show complex network structure at word level, with low degrees of
separation and scale-free (power law) degree distribution. There has also been
work on authorship attribution that incorporates ideas from complex networks.
The goal of our paper is to explore properties of these complex networks that
are suitable as features for machine-learning-based authorship attribution of
documents. We performed experiments on three different datasets, and obtained
promising results.
</summary>
<author>
<name>Shibamouli Lahiri</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<link href="http://arxiv.org/abs/1311.2978v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1311.2978v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1612.06685v1</id>
<updated>2016-12-20T14:44:19Z</updated>
<published>2016-12-20T14:44:19Z</published>
<title>Stateology: State-Level Interactive Charting of Language, Feelings, and
Values</title>
<summary> People's personality and motivations are manifest in their everyday language
usage. With the emergence of social media, ample examples of such usage are
procurable. In this paper, we aim to analyze the vocabulary used by close to
200,000 Blogger users in the U.S. with the purpose of geographically portraying
various demographic, linguistic, and psychological dimensions at the state
level. We give a description of a web-based tool for viewing maps that depict
various characteristics of the social media users as derived from this large
blog dataset of over two billion words.
</summary>
<author>
<name>Konstantinos Pappas</name>
</author>
<author>
<name>Steven Wilson</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">5 pages, 5 figures</arxiv:comment>
<link href="http://arxiv.org/abs/1612.06685v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1612.06685v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1804.09692v1</id>
<updated>2018-04-25T17:40:20Z</updated>
<published>2018-04-25T17:40:20Z</published>
<title>Factors Influencing the Surprising Instability of Word Embeddings</title>
<summary> Despite the recent popularity of word embedding methods, there is only a
small body of work exploring the limitations of these representations. In this
paper, we consider one aspect of embedding spaces, namely their stability. We
show that even relatively high frequency words (100-200 occurrences) are often
unstable. We provide empirical evidence for how various factors contribute to
the stability of word embeddings, and we analyze the effects of stability on
downstream tasks.
</summary>
<author>
<name>Laura Wendlandt</name>
</author>
<author>
<name>Jonathan K. Kummerfeld</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">NAACL HLT 2018</arxiv:comment>
<link href="http://arxiv.org/abs/1804.09692v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1804.09692v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1809.08761v1</id>
<updated>2018-09-24T05:00:05Z</updated>
<published>2018-09-24T05:00:05Z</published>
<title>Speaker Naming in Movies</title>
<summary> We propose a new model for speaker naming in movies that leverages visual,
textual, and acoustic modalities in a unified optimization framework. To
evaluate the performance of our model, we introduce a new dataset consisting of
six episodes of the Big Bang Theory TV show and eighteen full movies covering
different genres. Our experiments show that our multimodal model significantly
outperforms several competitive baselines on the average weighted F-score
metric. To demonstrate the effectiveness of our framework, we design an
end-to-end memory network model that leverages our speaker naming model and
achieves state-of-the-art results on the subtitles task of the MovieQA 2017
Challenge.
</summary>
<author>
<name>Mahmoud Azab</name>
</author>
<author>
<name>Mingzhe Wang</name>
</author>
<author>
<name>Max Smith</name>
</author>
<author>
<name>Noriyuki Kojima</name>
</author>
<author>
<name>Jia Deng</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<link href="http://arxiv.org/abs/1809.08761v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1809.08761v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CV" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1612.08205v1</id>
<updated>2016-12-24T17:09:21Z</updated>
<published>2016-12-24T17:09:21Z</published>
<title>Predicting the Industry of Users on Social Media</title>
<summary> Automatic profiling of social media users is an important task for supporting
a multitude of downstream applications. While a number of studies have used
social media content to extract and study collective social attributes, there
is a lack of substantial research that addresses the detection of a user's
industry. We frame this task as classification using both feature engineering
and ensemble learning. Our industry-detection system uses both posted content
and profile information to detect a user's industry with 64.3% accuracy,
significantly outperforming the majority baseline in a taxonomy of fourteen
industry classes. Our qualitative analysis suggests that a person's industry
not only affects the words used and their perceived meanings, but also the
number and type of emotions being expressed.
</summary>
<author>
<name>Konstantinos Pappas</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">8 pages, 3 figures, 12 tables</arxiv:comment>
<link href="http://arxiv.org/abs/1612.08205v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1612.08205v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.SI" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1708.07104v1</id>
<updated>2017-08-23T17:12:03Z</updated>
<published>2017-08-23T17:12:03Z</published>
<title>Automatic Detection of Fake News</title>
<summary> The proliferation of misleading information in everyday access media outlets
such as social media feeds, news blogs, and online newspapers has made it
challenging to identify trustworthy news sources, thus increasing the need for
computational tools able to provide insights into the reliability of online
content. In this paper, we focus on the automatic identification of fake
content in online news. Our contribution is twofold. First, we introduce two
novel datasets for the task of fake news detection, covering seven different
news domains. We describe the collection, annotation, and validation process in
detail and present several exploratory analyses of the identification of
linguistic differences in fake and legitimate news content. Second, we conduct
a set of learning experiments to build accurate fake news detectors. In
addition, we provide comparative analyses of the automatic and manual
identification of fake news.
</summary>
<author>
<name>Verónica Pérez-Rosas</name>
</author>
<author>
<name>Bennett Kleinberg</name>
</author>
<author>
<name>Alexandra Lefevre</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<link href="http://arxiv.org/abs/1708.07104v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1708.07104v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1804.07835v2</id>
<updated>2018-10-31T18:53:28Z</updated>
<published>2018-04-20T21:40:28Z</published>
<title>Direct Network Transfer: Transfer Learning of Sentence Embeddings for
Semantic Similarity</title>
<summary> Sentence encoders, which produce sentence embeddings using neural networks,
are typically evaluated by how well they transfer to downstream tasks. This
includes semantic similarity, an important task in natural language
understanding. Although there has been much work dedicated to building sentence
encoders, the accompanying transfer learning techniques have received
relatively little attention. In this paper, we propose a transfer learning
setting specialized for semantic similarity, which we refer to as direct
network transfer. Through experiments on several standard text similarity
datasets, we show that applying direct network transfer to existing encoders
can lead to state-of-the-art performance. Additionally, we compare several
approaches to transfer sentence encoders to semantic similarity tasks, showing
that the choice of transfer learning setting greatly affects the performance in
many cases, and differs by encoder and dataset.
</summary>
<author>
<name>Li Zhang</name>
</author>
<author>
<name>Steven R. Wilson</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<link href="http://arxiv.org/abs/1804.07835v2" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1804.07835v2" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1805.06413v1</id>
<updated>2018-05-16T16:38:38Z</updated>
<published>2018-05-16T16:38:38Z</published>
<title>CASCADE: Contextual Sarcasm Detection in Online Discussion Forums</title>
<summary> The literature in automated sarcasm detection has mainly focused on lexical,
syntactic and semantic-level analysis of text. However, a sarcastic sentence
can be expressed with contextual presumptions, background and commonsense
knowledge. In this paper, we propose CASCADE (a ContextuAl SarCasm DEtector)
that adopts a hybrid approach of both content and context-driven modeling for
sarcasm detection in online social media discussions. For the latter, CASCADE
aims at extracting contextual information from the discourse of a discussion
thread. Also, since the sarcastic nature and form of expression can vary from
person to person, CASCADE utilizes user embeddings that encode stylometric and
personality features of the users. When used along with content-based feature
extractors such as Convolutional Neural Networks (CNNs), we see a significant
boost in the classification performance on a large Reddit corpus.
</summary>
<author>
<name>Devamanyu Hazarika</name>
</author>
<author>
<name>Soujanya Poria</name>
</author>
<author>
<name>Sruthi Gorantla</name>
</author>
<author>
<name>Erik Cambria</name>
</author>
<author>
<name>Roger Zimmermann</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">Accepted in COLING 2018</arxiv:comment>
<link href="http://arxiv.org/abs/1805.06413v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1805.06413v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1805.12501v2</id>
<updated>2019-04-10T04:07:32Z</updated>
<published>2018-05-31T14:54:33Z</published>
<title>Multi-Label Transfer Learning for Multi-Relational Semantic Similarity</title>
<summary> Multi-relational semantic similarity datasets define the semantic relations
between two short texts in multiple ways, e.g., similarity, relatedness, and so
on. Yet, all the systems to date designed to capture such relations target one
relation at a time. We propose a multi-label transfer learning approach based
on LSTM to make predictions for several relations simultaneously and aggregate
the losses to update the parameters. This multi-label regression approach
jointly learns the information provided by the multiple relations, rather than
treating them as separate tasks. Not only does this approach outperform the
single-task approach and the traditional multi-task learning approach, but it
also achieves state-of-the-art performance on all but one relation of the Human
Activity Phrase dataset.
</summary>
<author>
<name>Li Zhang</name>
</author>
<author>
<name>Steven R. Wilson</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">Accepted to *SEM 2019</arxiv:comment>
<link href="http://arxiv.org/abs/1805.12501v2" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1805.12501v2" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1811.00405v3</id>
<updated>2018-11-14T09:56:20Z</updated>
<published>2018-11-01T14:27:19Z</published>
<title>DialogueRNN: An Attentive RNN for Emotion Detection in Conversations</title>
<summary> Emotion detection in conversations is a necessary step for a number of
applications, including opinion mining over chat history, social media threads,
debates, argumentation mining, understanding consumer feedback in live
conversations, etc. Currently, systems do not treat the parties in the
conversation individually by adapting to the speaker of each utterance. In this
paper, we describe a new method based on recurrent neural networks that keeps
track of the individual party states throughout the conversation and uses this
information for emotion classification. Our model outperforms the state of the
art by a significant margin on two different datasets.
</summary>
<author>
<name>Navonil Majumder</name>
</author>
<author>
<name>Soujanya Poria</name>
</author>
<author>
<name>Devamanyu Hazarika</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<author>
<name>Alexander Gelbukh</name>
</author>
<author>
<name>Erik Cambria</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">AAAI 2019</arxiv:comment>
<link href="http://arxiv.org/abs/1811.00405v3" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1811.00405v3" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1811.07497v1</id>
<updated>2018-11-19T04:42:54Z</updated>
<published>2018-11-19T04:42:54Z</published>
<title>A Comparative Analysis of Content-based Geolocation in Blogs and Tweets</title>
<summary> The geolocation of online information is an essential component in any
geospatial application. While most of the previous work on geolocation has
focused on Twitter, in this paper we quantify and compare the performance of
text-based geolocation methods on social media data drawn from both Blogger and
Twitter. We introduce a novel set of location-specific features that are both
highly informative and easily interpretable, and show that we can achieve error
rate reductions of up to 12.5% with respect to the best previously proposed
geolocation features. We also show that despite posting longer text, Blogger
users are significantly harder to geolocate than Twitter users. Additionally,
we investigate the effect of training and testing on different media
(cross-media predictions), or combining multiple social media sources
(multi-media predictions). Finally, we explore the geolocability of social
media in relation to three user dimensions: state, gender, and industry.
</summary>
<author>
<name>Konstantinos Pappas</name>
</author>
<author>
<name>Mahmoud Azab</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">31 pages, 6 figures, 8 tables</arxiv:comment>
<link href="http://arxiv.org/abs/1811.07497v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1811.07497v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="I.2.7" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1903.11672v1</id>
<updated>2019-03-27T19:49:00Z</updated>
<published>2019-03-27T19:49:00Z</published>
<title>MuSE-ing on the Impact of Utterance Ordering On Crowdsourced Emotion
Annotations</title>
<summary> Emotion recognition algorithms rely on data annotated with high quality
labels. However, emotion expression and perception are inherently subjective.
There is generally not a single annotation that can be unambiguously declared
"correct". As a result, annotations are colored by the manner in which they
were collected. In this paper, we conduct crowdsourcing experiments to
investigate this impact on both the annotations themselves and on the
performance of these algorithms. We focus on one critical question: the effect
of context. We present a new emotion dataset, Multimodal Stressed Emotion
(MuSE), and annotate the dataset using two conditions: randomized, in which
annotators are presented with clips in random order, and contextualized, in
which annotators are presented with clips in order. We find that contextual
labeling schemes result in annotations that are more similar to a speaker's own
self-reported labels and that labels generated from randomized schemes are most
easily predictable by automated systems.
</summary>
<author>
<name>Mimansa Jaiswal</name>
</author>
<author>
<name>Zakaria Aldeneh</name>
</author>
<author>
<name>Cristian-Paul Bara</name>
</author>
<author>
<name>Yuanhang Luo</name>
</author>
<author>
<name>Mihai Burzo</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<author>
<name>Emily Mower Provost</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">5 pages, ICASSP 2019</arxiv:comment>
<link href="http://arxiv.org/abs/1903.11672v1" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1903.11672v1" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.SD" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.SD" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.HC" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.LG" scheme="http://arxiv.org/schemas/atom"/>
<category term="eess.AS" scheme="http://arxiv.org/schemas/atom"/>
</entry>
<entry>
<id>http://arxiv.org/abs/1810.02508v4</id>
<updated>2018-10-23T09:51:03Z</updated>
<published>2018-10-05T03:50:24Z</published>
<title>MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in
Conversations</title>
<summary> Emotion recognition in conversations is a challenging Artificial Intelligence
(AI) task. Recently, it has gained popularity due to its potential applications
in many interesting AI tasks such as empathetic dialogue generation, user
behavior understanding, and so on. To the best of our knowledge, there is no
multimodal multi-party conversational dataset available that contains more
than two speakers in a dialogue. In this work, we propose the Multimodal
EmotionLines Dataset (MELD), which we created by enhancing and extending the
previously introduced EmotionLines dataset. MELD contains 13,708 utterances
from 1433 dialogues of the Friends TV series. MELD is superior to other
conversational emotion recognition datasets such as SEMAINE and IEMOCAP, as it
consists of multiparty conversations and contains almost twice as many
utterances as these two datasets. Every utterance in MELD is associated with an emotion and a
sentiment label. Utterances in MELD are multimodal encompassing audio and
visual modalities along with the text. We have also addressed several
shortcomings in EmotionLines and proposed a strong multimodal baseline. The
baseline results show that both contextual and multimodal information play an
important role in emotion recognition in conversations.
</summary>
<author>
<name>Soujanya Poria</name>
</author>
<author>
<name>Devamanyu Hazarika</name>
</author>
<author>
<name>Navonil Majumder</name>
</author>
<author>
<name>Gautam Naik</name>
</author>
<author>
<name>Erik Cambria</name>
</author>
<author>
<name>Rada Mihalcea</name>
</author>
<arxiv:comment xmlns:arxiv="http://arxiv.org/schemas/atom">https://affective-meld.github.io</arxiv:comment>
<link href="http://arxiv.org/abs/1810.02508v4" rel="alternate" type="text/html"/>
<link title="pdf" href="http://arxiv.org/pdf/1810.02508v4" rel="related" type="application/pdf"/>
<arxiv:primary_category xmlns:arxiv="http://arxiv.org/schemas/atom" term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
<category term="cs.CL" scheme="http://arxiv.org/schemas/atom"/>
</entry>
</feed>