Add seminar
dspinellis committed Dec 18, 2024
1 parent 3a7bfd8 commit 636ba74
Showing 1 changed file with 9 additions and 0 deletions.
9 changes: 9 additions & 0 deletions content/seminars/2025-01-13.md
@@ -0,0 +1,9 @@
title: Prompt Stability Scoring for Text Annotation with Large Language Models
date: 2025-01-13
presenter: Christopher Barrie, NYU
category: seminars
time: 17:30

Researchers are increasingly using language models (LMs) for text annotation. These approaches rely only on a prompt telling the model to return a given output according to a set of instructions. The reproducibility of LM outputs may nonetheless be vulnerable to small changes in the prompt design, which calls into question the replicability of classification routines. To tackle this problem, researchers have typically tested a variety of semantically similar prompts to determine what we call "prompt stability." These approaches remain ad hoc and task-specific. In this article, we propose a general framework for diagnosing prompt stability by adapting traditional approaches to intra- and inter-coder reliability scoring. We call the resulting metric the Prompt Stability Score (PSS) and provide a Python package, PromptStability, for its estimation. Using six different datasets and twelve outcomes, we classify >150k rows of data to: a) diagnose when prompt stability is low; and b) demonstrate the functionality of the package. We conclude by providing best-practice recommendations for applied researchers.
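
For intuition, here is a minimal, hypothetical sketch of the underlying idea: annotate the same texts with several semantically similar prompts and measure how much the resulting labels agree. This is not the PromptStability API and not the paper's PSS formula; the prompts, the `annotate` helper, and the toy labels below are placeholders, and mean pairwise agreement stands in for a proper reliability coefficient.

```python
from itertools import combinations

def pairwise_agreement(labels_by_prompt):
    """Mean pairwise agreement across annotations produced by different
    prompt paraphrases; a crude stand-in for a reliability coefficient."""
    scores = []
    for a, b in combinations(labels_by_prompt, 2):
        matches = sum(x == y for x, y in zip(a, b))
        scores.append(matches / len(a))
    return sum(scores) / len(scores)

# Hypothetical usage: annotate(prompt, texts) would call an LM and return
# one label per input text.
prompts = [
    "Label each text as POSITIVE or NEGATIVE.",
    "Classify the sentiment of each text (POSITIVE/NEGATIVE).",
]
# labels_by_prompt = [annotate(p, texts) for p in prompts]
labels_by_prompt = [["POS", "NEG", "POS"], ["POS", "NEG", "NEG"]]  # toy data
print(pairwise_agreement(labels_by_prompt))  # 2 of 3 labels agree -> ~0.67
```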

Bio: Christopher Barrie is Assistant Professor of Sociology at NYU. He is also Core Faculty at CSMaP and a Research Fellow in the Department of Sociology, University of Oxford.
