Commit

some typo fixes; codellama 70; tokens generated; colab link
jeffxtang committed May 3, 2024
1 parent aab327c commit 23afbd4
Showing 1 changed file with 12 additions and 7 deletions.
19 changes: 12 additions & 7 deletions recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb
@@ -5,6 +5,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
+"<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
+"\n",
"# Prompt Engineering with Llama 3\n",
"\n",
"Prompt engineering is using natural language to produce a desired response from a large language model (LLM).\n",
@@ -45,7 +47,7 @@
"\n",
"#### Llama 3\n",
"1. `llama-3-8b` - base pretrained 8 billion parameter model\n",
-"1. `llama-3-70b` - base pretrained 8 billion parameter model\n",
+"1. `llama-3-70b` - base pretrained 70 billion parameter model\n",
"1. `llama-3-8b-instruct` - instruction fine-tuned 8 billion parameter model\n",
"1. `llama-3-70b-instruct` - instruction fine-tuned 70 billion parameter model (flagship)\n",
"\n",
@@ -75,12 +77,15 @@
"1. `codellama-7b` - code fine-tuned 7 billion parameter model\n",
"1. `codellama-13b` - code fine-tuned 13 billion parameter model\n",
"1. `codellama-34b` - code fine-tuned 34 billion parameter model\n",
+"1. `codellama-70b` - code fine-tuned 70 billion parameter model\n",
"1. `codellama-7b-instruct` - code & instruct fine-tuned 7 billion parameter model\n",
"2. `codellama-13b-instruct` - code & instruct fine-tuned 13 billion parameter model\n",
"3. `codellama-34b-instruct` - code & instruct fine-tuned 34 billion parameter model\n",
+"3. `codellama-70b-instruct` - code & instruct fine-tuned 70 billion parameter model\n",
"1. `codellama-7b-python` - Python fine-tuned 7 billion parameter model\n",
"2. `codellama-13b-python` - Python fine-tuned 13 billion parameter model\n",
-"3. `codellama-34b-python` - Python fine-tuned 34 billion parameter model"
+"3. `codellama-34b-python` - Python fine-tuned 34 billion parameter model\n",
+"3. `codellama-70b-python` - Python fine-tuned 70 billion parameter model"
]
},
{
@@ -124,11 +129,11 @@
"\n",
"> Our destiny is written in the stars.\n",
"\n",
-"...is tokenized into `[\"Our\", \"destiny\", \"is\", \"written\", \"in\", \"the\", \"stars\", \".\"]` for Llama 3.\n",
+"...is tokenized into `[\"Our\", \" destiny\", \" is\", \" written\", \" in\", \" the\", \" stars\", \".\"]` for Llama 3. See [this](https://tiktokenizer.vercel.app/?model=meta-llama%2FMeta-Llama-3-8B) for an interactive tokenizer tool.\n",
"\n",
"Tokens matter most when you consider API pricing and internal behavior (ex. hyperparameters).\n",
"\n",
-"Each model has a maximum context length that your prompt cannot exceed. That's 8K tokens for Llama 3 and 100K for Code Llama. \n"
+"Each model has a maximum context length that your prompt cannot exceed. That's 8K tokens for Llama 3, 4K for Llama 2, and 100K for Code Llama. \n"
]
},
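The leading-space convention shown in the tokenized example above can be illustrated with a toy splitter. This is only a sketch: the real Llama 3 tokenizer is BPE-based with a large vocabulary and may also split words into subwords, which this toy version does not do.

```python
import re

def toy_tokenize(text):
    # Toy illustration of the leading-space convention only: each word
    # after the first keeps the space that precedes it, and punctuation
    # becomes its own token. Real BPE tokenization can split further.
    return re.findall(r" ?\w+|[^\w\s]", text)

print(toy_tokenize("Our destiny is written in the stars."))
# ['Our', ' destiny', ' is', ' written', ' in', ' the', ' stars', '.']
```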
{
@@ -164,7 +169,7 @@
"from groq import Groq\n",
"\n",
"# Get a free API key from https://console.groq.com/keys\n",
-"# os.environ[\"GROQ_API_KEY\"] = \"YOUR_KEY_HERE\"\n",
+"os.environ[\"GROQ_API_KEY\"] = \"YOUR_GROQ_API_KEY\"\n",
"\n",
"LLAMA3_70B_INSTRUCT = \"llama3-70b-8192\"\n",
"LLAMA3_8B_INSTRUCT = \"llama3-8b-8192\"\n",
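The client setup above can be exercised with a small wrapper. This is a hedged sketch assuming Groq's OpenAI-compatible `chat.completions.create` endpoint; the `complete`, `user`, and `system` helper names are illustrative, not necessarily the notebook's.

```python
def user(content):
    # Build a chat message in the role/content format the API expects.
    return {"role": "user", "content": content}

def system(content):
    return {"role": "system", "content": content}

def complete(messages, model="llama3-70b-8192", temperature=0.6):
    # Deferred import so the message helpers work without the groq package.
    from groq import Groq
    client = Groq()  # reads GROQ_API_KEY from the environment
    response = client.chat.completions.create(
        model=model, messages=messages, temperature=temperature
    )
    return response.choices[0].message.content
```

Usage would look like `complete([system("Be terse."), user("Why is the sky blue?")])`, returning the model's reply as a string.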
@@ -699,7 +704,7 @@
"source": [
"### Limiting Extraneous Tokens\n",
"\n",
-"A common struggle is getting output without extraneous tokens (ex. \"Sure! Here's more information on...\").\n",
+"A common struggle with Llama 2 is getting output without extraneous tokens (ex. \"Sure! Here's more information on...\"), even when it is explicitly instructed to be concise and skip the preamble. Llama 3 follows such instructions more reliably.\n",
"\n",
"Check out this improvement that combines a role, rules and restrictions, explicit instructions, and an example:"
]
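The combined pattern can be sketched as follows. The role, rules, and example below are illustrative placeholders, not the notebook's exact prompt.

```python
def build_messages(question):
    # Role + rules/restrictions + explicit instruction + a worked example,
    # all packed into the system message to suppress preamble text.
    system_prompt = (
        "You are a terse assistant that outputs only the answer.\n"    # role
        "Rules: no greetings, no preamble, no follow-up questions.\n"  # rules
        "Answer in at most one sentence.\n"            # explicit instruction
        "Example -- Q: Capital of France? A: Paris."   # example
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_messages("What is the capital of Japan?")
```

The resulting list plugs directly into a chat-completion call as its `messages` argument.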
@@ -766,7 +771,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.12.3"
+"version": "3.10.14"
},
"last_base_url": "https://bento.edge.x2p.facebook.net/",
"last_kernel_id": "161e2a7b-2d2b-4995-87f3-d1539860ecac",
