diff --git a/docs/source/guide/project_settings_lse.md b/docs/source/guide/project_settings_lse.md
index 717d0c7732dd..34465e890488 100644
--- a/docs/source/guide/project_settings_lse.md
+++ b/docs/source/guide/project_settings_lse.md
@@ -157,7 +157,7 @@ Configure additional settings for annotators.
-If you have an ML backend or model connected, you can use this setting to determine whether tasks should be pre-labeled using predictions from the model. For more information, see [Integrate Label Studio into your machine learning pipeline](ml).
+If you have an ML backend or model connected, or if you're using [Prompts](prompts_overview) to generate predictions, you can use this setting to determine whether tasks should be pre-labeled using predictions. For more information, see [Integrate Label Studio into your machine learning pipeline](ml) and [Generate predictions from a prompt](prompts_predictions).
Use the drop-down menu to select the predictions source. For example, you can select a [connected model](#Model) or a set of [predictions](#Predictions).
@@ -479,7 +479,11 @@ And the following actions are available from the overflow menu next to a connect
## Predictions
-From here you can view predictions that have been imported or generated when executing the **Batch Predictions** action from the Data Manager. For more information on using predictions, see [Import pre-annotated data into Label Studio](predictions).
+From here you can view predictions that have been imported, generated with [Prompts](prompts_predictions), or generated when executing the **Batch Predictions** action from the Data Manager. For more information on using predictions, see [Import pre-annotated data into Label Studio](predictions).
+
+To remove predictions from the project, click the overflow menu next to the predictions set and select **Delete**.
+
+To determine which predictions are shown to annotators, use the [**Annotation > Live Predictions** section](#Annotation).
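+
+If you prefer to review prediction sets programmatically, the following is a minimal sketch that uses the Label Studio API to count how many predictions each model version has contributed to a project. The host URL, API token, and project ID are placeholders for your own values, and pagination is omitted for brevity.
+
+```python
+"""Minimal sketch: list the prediction sets (model versions) in a project.
+
+The host, API token, and project ID below are placeholders; replace them
+with your own values. Endpoints follow the Label Studio REST API.
+"""
+from collections import Counter
+
+import requests
+
+LS_URL = "https://app.humansignal.com"   # or your own Label Studio host
+HEADERS = {"Authorization": "Token your-api-token"}
+PROJECT_ID = 1
+
+# Fetch the project's tasks (pagination omitted for brevity).
+tasks = requests.get(f"{LS_URL}/api/projects/{PROJECT_ID}/tasks/", headers=HEADERS).json()
+
+# Count predictions per model version across all tasks.
+versions = Counter()
+for task in tasks:
+    predictions = requests.get(
+        f"{LS_URL}/api/predictions/", headers=HEADERS, params={"task": task["id"]}
+    ).json()
+    versions.update(p.get("model_version") for p in predictions)
+
+for version, count in versions.items():
+    print(f"{version}: {count} predictions")
+```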
## Cloud storage
diff --git a/docs/source/guide/prompts_model.md b/docs/source/guide/prompts_create.md
similarity index 100%
rename from docs/source/guide/prompts_model.md
rename to docs/source/guide/prompts_create.md
diff --git a/docs/source/guide/prompts_draft.md b/docs/source/guide/prompts_draft.md
index 5a7430375eb1..bcbeb211f7d3 100644
--- a/docs/source/guide/prompts_draft.md
+++ b/docs/source/guide/prompts_draft.md
@@ -1,17 +1,17 @@
---
-title: Draft a prompt
-short: Draft a prompt
+title: Draft and run prompts
+short: Draft and run prompts
tier: enterprise
type: guide
order: 0
order_enterprise: 231
-meta_title: Draft a prompt
+meta_title: Draft and run prompts
meta_description: Create and evaluate an LLM prompt
section: Prompts
date: 2024-06-12 14:09:09
---
-With your [Prompts model created](prompts_model), you can begin drafting prompts to generate predictions or .
+With your [Prompt created](prompts_create), you can begin drafting your prompt content to run against baseline tasks.
## Draft a prompt and generate predictions
@@ -33,8 +33,10 @@ With your [Prompts model created](prompts_model), you can begin drafting prompts
4. Click **Save**.
5. Click **Evaluate**.
-!!! note
- When you click **Evaluate**, you will create predictions for each task in the baseline you selected task. When you return to the project, you will see this reflected in your tasks. You can see how many predictions a task has using the **Predictions** column in the Data Manager.
+!!! warning
+ When you click **Evaluate**, you will create predictions for each task in the baseline you selected and overwrite any previous predictions you generated with this prompt.
+
+ Evaluating your Prompts can also result in multiple predictions on a task: if you have multiple Prompts for one project, or if you click both **Evaluate** and **Get Predictions for All Tasks from a Prompt**, you will see multiple predictions per task in the Data Manager.
diff --git a/docs/source/guide/prompts_overview.md b/docs/source/guide/prompts_overview.md
index 4946f45dcd00..4fbeb8e7d51f 100644
--- a/docs/source/guide/prompts_overview.md
+++ b/docs/source/guide/prompts_overview.md
@@ -47,7 +47,7 @@ By utilizing AI to handle the bulk of the annotation work, you can significantly
* [Blog - What's a ground truth dataset?](https://humansignal.com/blog/what-s-a-ground-truth-dataset/)
3. Go to the Prompts page and create a new model. If you haven't already, you will also need to add an OpenAI API key.
- * [Create a model](prompts_model)
+ * [Create a Prompt](prompts_create)
* [Where do I find my OpenAI API Key?](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
4. Write a prompt and evaluate it against your ground truth dataset.
@@ -80,7 +80,7 @@ Additionally, this workflow provides a scalable solution for continuously expand
* [Sync data from external storage](storage)
2. Go to the Prompts page and create a new model. If you haven't already, you will also need to add an OpenAI API key.
- * [Create a model](prompts_model)
+ * [Create a Prompt](prompts_create)
* [Where do I find my OpenAI API Key?](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
3. Write a prompt and run it against your task samples.
@@ -116,7 +116,7 @@ This feedback loop allows you to iteratively fine-tune your prompts, optimizing
* [Blog - What's a ground truth dataset?](https://humansignal.com/blog/what-s-a-ground-truth-dataset/)
3. Go to the Prompts page and create a new model. If you haven't already, you will also need to add an OpenAI API key.
- * [Create a model](prompts_model)
+ * [Create a Prompt](prompts_create)
* [Where do I find my OpenAI API Key?](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
4. Write a prompt and evaluate it against your ground truth dataset.
diff --git a/docs/source/guide/prompts_predictions.md b/docs/source/guide/prompts_predictions.md
index a56de76e02bc..7c0361ca74ae 100644
--- a/docs/source/guide/prompts_predictions.md
+++ b/docs/source/guide/prompts_predictions.md
@@ -32,15 +32,17 @@ Once complete, you can return to the project and open the Data Manager. Use the
## Remove predictions
-If you prematurely generated predictions or want to use a new prompt, simply select all tasks and select **Actions > Delete Predictions**. To only remove predictions from certain models or model versions, use the **Predictions** page in the project settings.
+If you prematurely generated predictions or want to use a new prompt, simply select all tasks and select **Actions > Delete Predictions**. To only remove predictions from certain models or model versions, use [the **Predictions** page in the project settings](project_settings_lse#Predictions).
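+
+If you need to script this cleanup, the sketch below is one way to remove only the predictions that belong to a specific model version through the Label Studio API. The host, API token, project ID, and version string are placeholders; verify the version name (for example, on the **Predictions** settings page) before running, because deleted predictions cannot be recovered.
+
+```python
+"""Sketch: delete only the predictions from one model version.
+
+All values below are placeholders; deletions are permanent, so double-check
+the target model version before running this against a real project.
+"""
+import requests
+
+LS_URL = "https://app.humansignal.com"   # or your own Label Studio host
+HEADERS = {"Authorization": "Token your-api-token"}
+PROJECT_ID = 1
+TARGET_VERSION = "my-prompt-v2"          # model version whose predictions to remove
+
+tasks = requests.get(f"{LS_URL}/api/projects/{PROJECT_ID}/tasks/", headers=HEADERS).json()
+for task in tasks:
+    predictions = requests.get(
+        f"{LS_URL}/api/predictions/", headers=HEADERS, params={"task": task["id"]}
+    ).json()
+    for prediction in predictions:
+        if prediction.get("model_version") == TARGET_VERSION:
+            # Permanently delete this prediction.
+            requests.delete(f"{LS_URL}/api/predictions/{prediction['id']}/", headers=HEADERS)
+```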
## Create annotations from predictions
-Once you have your predictions in place, you still need to convert them to annotations. You can review predictions by opening tasks. The predictions are listed under the model name and are grayed out:
+Once you have your predictions in place, you might still want to convert them to annotations (depending on your workflow and your desired outcome).
+
+You can review predictions by opening tasks. The predictions are listed under the model name and are grayed out:
![Screenshot of the prediction preview](/images/prompts/prediction.png)
From the Data Manager, select all the tasks you want to label and then select **Actions > Create Annotations from Predictions**. You are asked to select the model and version you want to use.
-![Gif of the of create annotations action](/images/prompts/create_annotations_1.png)
\ No newline at end of file
+![Gif of the create annotations action](/images/prompts/create_annotations_1.gif)
\ No newline at end of file
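+
+If you need to promote predictions outside of the Data Manager, a rough programmatic equivalent is to copy a prediction's `result` into a new annotation through the Label Studio API. The sketch below does this for a single task; the host, API token, task ID, and model version are placeholders, and the Data Manager action above remains the simpler path.
+
+```python
+"""Sketch: create an annotation on one task by reusing a prediction's result.
+
+The host, API token, task ID, and model version are placeholders; the
+Data Manager action is usually the easier way to do this in bulk.
+"""
+import requests
+
+LS_URL = "https://app.humansignal.com"   # or your own Label Studio host
+HEADERS = {"Authorization": "Token your-api-token"}
+TASK_ID = 42
+MODEL_VERSION = "my-prompt-v1"           # which prediction set to promote
+
+# Find the prediction from the chosen model version on this task.
+predictions = requests.get(
+    f"{LS_URL}/api/predictions/", headers=HEADERS, params={"task": TASK_ID}
+).json()
+prediction = next(p for p in predictions if p.get("model_version") == MODEL_VERSION)
+
+# Create an annotation on the task that reuses the prediction's result.
+requests.post(
+    f"{LS_URL}/api/tasks/{TASK_ID}/annotations/",
+    headers=HEADERS,
+    json={"result": prediction["result"]},
+)
+```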