The files in this directory belong to the paper:

"Dimensional Modeling of Emotions in Text with Appraisal Theories: Corpus Creation, Annotation Reliability, and Prediction"
by Enrica Troiano, Laura Oberländer, and Roman Klinger
This repository contains the code to reproduce and use our modeling experiments. The following figure, taken from our paper (Figure 10 there), shows our main experimental framework. In the figure:
- a box corresponds to a model (or to a group of models, in the case of the human evaluation).
- a head indicates data that directly stems from the generator of a textual instance.
- the lines correspond to the flow of information used by the box connected to the arrowhead.
The left-most model (1) represents the classification performance of the validators (readers) of crowd-enVENT. It shows how well humans performed on the task that our computational models (2) to (7) undertake.
Each experiment or group of experiments is placed in a subdirectory of the repository. Each subdirectory usually contains the following files and folders:
`Makefile`
: a Makefile to reproduce the work

`scripts`
: Python code

`sources`
: a symlink to the top-level sources folder, in which you need to place our corpus

`workdata`
: initially empty; this is where the experiment stores intermediary output files

`outputs`
: final output files (e.g., predictions, tables) of the experiments

`predictions`
: predictions produced by the original models

`plots`, `tables`
: plots and tables produced by the experiment

`README.md`
: specific information on how to run this experiment

`requirements.txt`
: Python dependencies
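Putting these together, a typical experiment subdirectory looks roughly like the sketch below (illustrative only; individual experiments may omit some of these entries):

```
experiment/
├── Makefile
├── README.md
├── requirements.txt
├── scripts/
├── sources -> ../sources
├── workdata/
├── outputs/
├── predictions/
├── plots/
└── tables/
```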
- Set up a virtual environment:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```
- Download and prepare our corpus by calling `make` in the top-level folder.
["joy", "sadness", "surprise", "anger", "fear", "disgust", "relief", "guilt", "shame", "trust", "pride", "boredom", "no-emotion"]
["suddenness", "familiarity", "predict_event", "pleasantness", "unpleasantness", "goal_relevance", "chance_responsblt" "self_responsblt", "other_responsblt", "predict_conseq", "goal_support", "urgency", "self_control", "other_control", "chance_control", "accept_conseq", "standards", "social_norms", "attention", "not_consider", "effort"]
- `cd human_evaluation`
- read the corresponding `README` and the `Makefile`.
- `cd emotions-and-appraisals-from-text`
- read the corresponding `README` and the `Makefile`.
- edit `schedule.sh` and `evaluate.sh` to include the `classification` and `regression` settings (a sketch of such an edit follows below).
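The following is a hypothetical sketch of what such an edit could look like; the loop structure and the per-setting invocation are assumptions, so consult `schedule.sh` and `evaluate.sh` themselves for the exact syntax they use.

```bash
# Hypothetical sketch of the schedule.sh edit: run both settings of this
# experiment. The invocation below is a placeholder, not the real call.
for setting in classification regression; do
    bash run.sh "$setting"   # placeholder for whatever the script launches per setting
done
```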
- `cd emotions-and-appraisals-from-text-jiant`
- read the corresponding `README` and the `Makefile`.
- `cd emotion-from-appraisal-values`
- read the `README` and follow the instructions there.
- `cd emotions-and-appraisals-from-text`
- read the corresponding `README` and the `Makefile`.
- edit the `schedule.sh` and `evaluate.sh` scripts to include only the `combined` setting (see the sketch after this list).
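Analogously to the earlier sketch, restricting the scripts to a single setting might look as follows (again hypothetical; the real scripts may organize this differently).

```bash
# Hypothetical sketch: keep only the combined setting in schedule.sh/evaluate.sh.
for setting in combined; do
    bash run.sh "$setting"   # placeholder, as in the sketch above
done
```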
Please cite this paper as:
```bibtex
@article{Troiano2023,
  author    = {Enrica Troiano and Laura Oberl\"ander and Roman Klinger},
  title     = {Dimensional Modeling of Emotions in Text with Appraisal
               Theories: Corpus Creation, Annotation Reliability, and
               Prediction},
  journal   = {Computational Linguistics},
  volume    = {49},
  number    = {1},
  month     = mar,
  year      = {2023},
  address   = {Cambridge, MA},
  publisher = {MIT Press},
  doi       = {10.1162/coli_a_00461},
  url       = {https://doi.org/10.1162/coli_a_00461},
}
```
For any questions regarding the paper, don't hesitate to contact us at:
Do you have a question regarding the contents of this repository? Please create an issue and we will get back to you as soon as possible.