Renaming the model page and removing model references
caitlinwheeless committed Jun 24, 2024
1 parent 539ce61 commit 2e2b5d2
Showing 5 changed files with 22 additions and 14 deletions.
8 changes: 6 additions & 2 deletions docs/source/guide/project_settings_lse.md
@@ -157,7 +157,7 @@ Configure additional settings for annotators.

<dd>

- If you have an ML backend or model connected, you can use this setting to determine whether tasks should be pre-labeled using predictions from the model. For more information, see [Integrate Label Studio into your machine learning pipeline](ml).
+ If you have an ML backend or model connected, or if you're using [Prompts](prompts_overview) to generate predictions, you can use this setting to determine whether tasks should be pre-labeled using predictions. For more information, see [Integrate Label Studio into your machine learning pipeline](ml) and [Generate predictions from a prompt](prompts_predictions).

Use the drop-down menu to select the predictions source. For example, you can select a [connected model](#Model) or a set of [predictions](#Predictions).

@@ -479,7 +479,11 @@ And the following actions are available from the overflow menu next to a connect

## Predictions

- From here you can view predictions that have been imported or generated when executing the **Batch Predictions** action from the Data Manager. For more information on using predictions, see [Import pre-annotated data into Label Studio](predictions).
+ From here you can view predictions that have been imported, generated with [Prompts](prompts_predictions), or generated when executing the **Batch Predictions** action from the Data Manager. For more information on using predictions, see [Import pre-annotated data into Label Studio](predictions).

+ To remove predictions from the project, click the overflow menu next to the predictions set and select **Delete**.
+
+ To determine which predictions are shown to annotators, use the [**Annotation > Live Predictions** section](#Annotation).
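Each predictions set on this page corresponds to a `model_version` in Label Studio's pre-annotation JSON format. A minimal sketch of an imported task with one prediction (the `sentiment`/`text` control names and the label value are hypothetical and must match your own labeling configuration):

```python
# Sketch of a pre-annotated task in Label Studio's import format.
# The control/object names and label values below are placeholders.
task = {
    "data": {"text": "Great product, fast shipping!"},
    "predictions": [
        {
            "model_version": "prompt-v1",  # identifies the predictions set
            "score": 0.91,                 # optional confidence score
            "result": [
                {
                    "from_name": "sentiment",  # control tag in your config
                    "to_name": "text",         # object tag in your config
                    "type": "choices",
                    "value": {"choices": ["Positive"]},
                }
            ],
        }
    ],
}

# Each set shown on the Predictions page maps to a distinct model_version.
versions = sorted({p["model_version"] for p in task["predictions"]})
print(versions)  # ['prompt-v1']
```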

## Cloud storage

File renamed without changes.
14 changes: 8 additions & 6 deletions docs/source/guide/prompts_draft.md
@@ -1,17 +1,17 @@
---
- title: Draft a prompt
- short: Draft a prompt
+ title: Draft and run prompts
+ short: Draft and run prompts
tier: enterprise
type: guide
order: 0
order_enterprise: 231
- meta_title: Draft a prompt
+ meta_title: Draft your Prompt
meta_description: Create and evaluate an LLM prompt
section: Prompts
date: 2024-06-12 14:09:09
---

- With your [Prompts model created](prompts_model), you can begin drafting prompts to generate predictions or .
+ With your [Prompt created](prompts_create), you can begin drafting your prompt content to run against baseline tasks.

## Draft a prompt and generate predictions

@@ -33,8 +33,10 @@ With your [Prompts model created](prompts_model), you can begin drafting prompts
4. Click **Save**.
5. Click **Evaluate**.

- !!! note
-     When you click **Evaluate**, you will create predictions for each task in the baseline you selected task. When you return to the project, you will see this reflected in your tasks. You can see how many predictions a task has using the **Predictions** column in the Data Manager.
+ !!! warning
+     When you click **Evaluate**, you will create predictions for each task in the baseline you selected and overwrite any previous predictions you generated with this prompt.

+ Evaluating your Prompts can result in multiple predictions on your tasks: if you have multiple Prompts for one project, or if you click both **Evaluate** and **Get Predictions for All Tasks from a Prompt**, you will see multiple predictions per task in the Data Manager.
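Because repeated evaluations can leave several predictions on one task, a small sketch of how you might summarize them by model version when reviewing exported tasks (the task structure mirrors Label Studio's format; IDs, version names, and scores are hypothetical):

```python
from collections import Counter

# Hypothetical task carrying predictions from two different prompt runs.
task = {
    "id": 42,
    "predictions": [
        {"model_version": "my-prompt-v1", "score": 0.72},
        {"model_version": "my-prompt-v2", "score": 0.88},
    ],
}

# Count predictions per model version, mirroring what the
# Predictions column in the Data Manager summarizes.
counts = Counter(p["model_version"] for p in task["predictions"])
print(dict(counts))  # {'my-prompt-v1': 1, 'my-prompt-v2': 1}

# When reviewing, you might keep only the highest-scoring prediction.
best = max(task["predictions"], key=lambda p: p["score"])
print(best["model_version"])  # my-prompt-v2
```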

<br><br>
<video src="../images/prompts/prompts.mp4" controls="controls" style="max-width: 800px;" class="gif-border" />
6 changes: 3 additions & 3 deletions docs/source/guide/prompts_overview.md
@@ -47,7 +47,7 @@ By utilizing AI to handle the bulk of the annotation work, you can significantly
* [Blog - What's a ground truth dataset?](https://humansignal.com/blog/what-s-a-ground-truth-dataset/)
3. Go to the Prompts page and create a new model. If you haven't already, you will also need to add an OpenAI API key.

- * [Create a model](prompts_model)
+ * [Create a Prompt](prompts_create)
* [Where do I find my OpenAI API Key?](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
4. Write a prompt and evaluate it against your ground truth dataset.

@@ -80,7 +80,7 @@ Additionally, this workflow provides a scalable solution for continuously expand
* [Sync data from external storage](storage)
2. Go to the Prompts page and create a new model. If you haven't already, you will also need to add an OpenAI API key.

- * [Create a model](prompts_model)
+ * [Create a Prompt](prompts_create)
* [Where do I find my OpenAI API Key?](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
3. Write a prompt and run it against your task samples.

@@ -116,7 +116,7 @@ This feedback loop allows you to iteratively fine-tune your prompts, optimizing
* [Blog - What's a ground truth dataset?](https://humansignal.com/blog/what-s-a-ground-truth-dataset/)
3. Go to the Prompts page and create a new model. If you haven't already, you will also need to add an OpenAI API key.

- * [Create a model](prompts_model)
+ * [Create a Prompt](prompts_create)
* [Where do I find my OpenAI API Key?](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
4. Write a prompt and evaluate it against your ground truth dataset.

8 changes: 5 additions & 3 deletions docs/source/guide/prompts_predictions.md
@@ -32,15 +32,17 @@ Once complete, you can return to the project and open the Data Manager. Use the

## Remove predictions

- If you prematurely generated predictions or want to use a new prompt, simply select all tasks and select **Actions > Delete Predictions**. To only remove predictions from certain models or model versions, use the **Predictions** page in the project settings.
+ If you prematurely generated predictions or want to use a new prompt, select all tasks and then select **Actions > Delete Predictions**. To remove only the predictions from certain models or model versions, use [the **Predictions** page in the project settings](project_settings_lse#Predictions).
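If you script this cleanup instead of using the UI, the selection step amounts to filtering prediction records by model version. A sketch (the prediction dicts mirror what the Label Studio API returns; the IDs and version names are hypothetical):

```python
# Sketch: pick which prediction IDs to delete, filtered by model version.
predictions = [
    {"id": 101, "model_version": "my-prompt-v1"},
    {"id": 102, "model_version": "my-prompt-v2"},
    {"id": 103, "model_version": "my-prompt-v1"},
]

stale = [p["id"] for p in predictions if p["model_version"] == "my-prompt-v1"]
print(stale)  # [101, 103]

# Each selected ID could then be removed via the REST API, e.g.
# (assuming the standard predictions endpoint):
#   DELETE {host}/api/predictions/{id}/
```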

## Create annotations from predictions

- Once you have your predictions in place, you still need to convert them to annotations. You can review predictions by opening tasks. The predictions are listed under the model name and are grayed out:
+ Once you have your predictions in place, you might still want to convert them to annotations (depending on your workflow and your desired outcome).
+
+ You can review predictions by opening tasks. The predictions are listed under the model name and are grayed out:

![Screenshot of the prediction preview](/images/prompts/prediction.png)


From the Data Manager, select all the tasks you want to label and then select **Actions > Create Annotations from Predictions**. You are asked to select the model and version you want to use.

- ![Gif of the of create annotations action](/images/prompts/create_annotations_1.png)
+ ![Gif of the create annotations action](/images/prompts/create_annotations_1.gif)
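Conceptually, the **Create Annotations from Predictions** action copies the selected prediction's result payload into a new annotation on the same task. An illustrative sketch (field names follow Label Studio's task format; the control names and label value are hypothetical):

```python
# Sketch: what "Create Annotations from Predictions" conceptually does --
# copy a prediction's result into a new annotation on the same task.
prediction = {
    "model_version": "my-prompt-v1",
    "result": [
        {
            "from_name": "sentiment",
            "to_name": "text",
            "type": "choices",
            "value": {"choices": ["Positive"]},
        }
    ],
}

annotation = {
    "result": prediction["result"],  # same result payload, now editable
    "was_cancelled": False,          # a regular, non-skipped annotation
}
print(annotation["result"][0]["value"]["choices"])  # ['Positive']
```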
