diff --git a/docs/source/guide/project_settings_lse.md b/docs/source/guide/project_settings_lse.md
index 717d0c7732dd..34465e890488 100644
--- a/docs/source/guide/project_settings_lse.md
+++ b/docs/source/guide/project_settings_lse.md
@@ -157,7 +157,7 @@ Configure additional settings for annotators.
-If you have an ML backend or model connected, you can use this setting to determine whether tasks should be pre-labeled using predictions from the model. For more information, see [Integrate Label Studio into your machine learning pipeline](ml).
+If you have an ML backend or model connected, or if you're using [Prompts](prompts_overview) to generate predictions, you can use this setting to determine whether tasks should be pre-labeled using predictions. For more information, see [Integrate Label Studio into your machine learning pipeline](ml) and [Generate predictions from a prompt](prompts_predictions).
 Use the drop-down menu to select the predictions source. For example, you can select a [connected model](#Model) or a set of [predictions](#Predictions).
@@ -479,7 +479,11 @@ And the following actions are available from the overflow menu next to a connect
 ## Predictions
-From here you can view predictions that have been imported or generated when executing the **Batch Predictions** action from the Data Manager. For more information on using predictions, see [Import pre-annotated data into Label Studio](predictions).
+From here you can view predictions that have been imported, generated with [Prompts](prompts_predictions), or generated when executing the **Batch Predictions** action from the Data Manager. For more information on using predictions, see [Import pre-annotated data into Label Studio](predictions).
+
+To remove predictions from the project, click the overflow menu next to the predictions set and select **Delete**.
+
+To determine which predictions are shown to annotators, use the [**Annotation > Live Predictions** section](#Annotation).
 ## Cloud storage
diff --git a/docs/source/guide/prompts_model.md b/docs/source/guide/prompts_create.md
similarity index 100%
rename from docs/source/guide/prompts_model.md
rename to docs/source/guide/prompts_create.md
diff --git a/docs/source/guide/prompts_draft.md b/docs/source/guide/prompts_draft.md
index 5a7430375eb1..bcbeb211f7d3 100644
--- a/docs/source/guide/prompts_draft.md
+++ b/docs/source/guide/prompts_draft.md
@@ -1,17 +1,17 @@
 ---
-title: Draft a prompt
-short: Draft a prompt
+title: Draft and run prompts
+short: Draft and run prompts
 tier: enterprise
 type: guide
 order: 0
 order_enterprise: 231
-meta_title: Draft a prompt
+meta_title: Draft your Prompt
 meta_description: Create and evaluate an LLM prompt
 section: Prompts
 date: 2024-06-12 14:09:09
 ---
-With your [Prompts model created](prompts_model), you can begin drafting prompts to generate predictions or .
+With your [Prompt created](prompts_create), you can begin drafting your prompt content to run against baseline tasks.
 ## Draft a prompt and generate predictions
@@ -33,8 +33,10 @@ With your [Prompts model created](prompts_model), you can begin drafting prompts
 4. Click **Save**.
 5. Click **Evaluate**.
-!!! note
-    When you click **Evaluate**, you will create predictions for each task in the baseline you selected task. When you return to the project, you will see this reflected in your tasks. You can see how many predictions a task has using the **Predictions** column in the Data Manager.
+!!! warning
+    When you click **Evaluate**, you will create predictions for each task in the baseline you selected and overwrite any previous predictions you generated with this prompt.
+ + Evaluating your Prompts can result in multiple predictions on your tasks: if you have multiple Prompts for one Project, or if you click both **Evaluate** and **Get Predictions for All Tasks from a Prompt**, you will see multiple predictions for tasks in the Data Manager.