From 0c2cb60fab341a542bb984c8aa20dceb00e280c9 Mon Sep 17 00:00:00 2001 From: ldecarvalho-doc <82805470+ldecarvalho-doc@users.noreply.github.com> Date: Tue, 3 Dec 2024 10:31:06 +0100 Subject: [PATCH] fix(llm): reviews 25/11 (#4067) --- .../reference-content/llama-3-70b-instruct.mdx | 6 +++--- .../reference-content/sentence-t5-xxl.mdx | 10 +++++----- .../kubernetes/how-to/use-nvidia-gpu-operator.mdx | 2 +- dedibox-network/ipv6/how-to/configure-ipv6-windows.mdx | 2 +- dedibox-network/ipv6/how-to/enable-ipv6-slaac.mdx | 2 +- dedibox-network/ipv6/how-to/request-prefix.mdx | 2 +- 6 files changed, 12 insertions(+), 12 deletions(-) diff --git a/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx b/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx index 93d2b4e747..93c9fab280 100644 --- a/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx +++ b/ai-data/managed-inference/reference-content/llama-3-70b-instruct.mdx @@ -7,7 +7,7 @@ content: paragraph: This page provides information on the Llama-3-70b-instruct model tags: dates: - validation: 2024-05-28 + validation: 2024-12-03 posted: 2024-05-28 categories: - ai-data @@ -34,7 +34,7 @@ meta/llama-3-70b-instruct:fp8 ## Model introduction Meta’s Llama 3 is an iteration of the open-access Llama family. -Llama 3 was designed to match the best proprietary models, enhanced by community feedback for greater utility and responsibly spearheading the deployment of LLMs. +Llama 3 was designed to match the best proprietary models, enhanced by community feedback for greater utility, and to spearhead the responsible deployment of LLMs. With a commitment to open-source principles, this release marks the beginning of a multilingual, multimodal future for Llama 3, pushing the boundaries in reasoning and coding capabilities. ## Why is it useful? 
@@ -77,7 +77,7 @@ Make sure to replace `` and `` with your actual [I ### Receiving Inference responses -Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the managed Managed Inference server. +Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server. Process the output data according to your application's needs. The response will contain the output generated by the LLM model based on the input provided in the request. diff --git a/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx b/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx index 0c222e4ffe..bc4bb03d51 100644 --- a/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx +++ b/ai-data/managed-inference/reference-content/sentence-t5-xxl.mdx @@ -7,7 +7,7 @@ content: paragraph: This page provides information on the Sentence-t5-xxl embedding model tags: embedding dates: - validation: 2024-05-22 + validation: 2024-12-03 posted: 2024-05-22 categories: - ai-data @@ -31,12 +31,12 @@ sentence-transformers/sentence-t5-xxl:fp32 | Instance type | Max context length | | ------------- |-------------| -| L4 | 512 (FP32) | +| L4 | 512 (FP32) | ## Model introduction -The Sentence-T5-XXL model represents a significant evolution in sentence embeddings, building on the robust foundation of the Text-To-Text Transfer Transformer (T5) architecture. -Designed for performance in various language processing tasks, Sentence-T5-XXL leverages the strengths of T5's encoder-decoder structure to generate high-dimensional vectors that encapsulate rich semantic information. +The Sentence-T5-XXL model represents a significant evolution in sentence embeddings, building on the robust foundation of the Text-To-Text Transfer Transformer (T5) architecture. 
+Designed for performance in various language processing tasks, Sentence-T5-XXL leverages the strengths of T5's encoder-decoder structure to generate high-dimensional vectors that encapsulate rich semantic information. This model has been meticulously tuned for tasks such as text classification, semantic similarity, and clustering, making it a useful tool in the RAG (Retrieval-Augmented Generation) framework. It excels in sentence similarity tasks, but its performance in semantic search tasks is less optimal. ## Why is it useful? @@ -66,5 +66,5 @@ Make sure to replace `` and `` with your actual [I ### Receiving Inference responses -Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the managed Managed Inference server. +Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server. Process the output data according to your application's needs. The response will contain the output generated by the embedding model based on the input provided in the request. 
diff --git a/containers/kubernetes/how-to/use-nvidia-gpu-operator.mdx b/containers/kubernetes/how-to/use-nvidia-gpu-operator.mdx index a313df5c83..798771aedd 100644 --- a/containers/kubernetes/how-to/use-nvidia-gpu-operator.mdx +++ b/containers/kubernetes/how-to/use-nvidia-gpu-operator.mdx @@ -7,7 +7,7 @@ content: paragraph: This page explains how to use the NVIDIA GPU operator on Kapsule and Kosmos with GPU Instances tags: kubernetes kubernetes-kapsule kapsule cluster gpu-operator nvidia gpu dates: - validation: 2024-05-22 + validation: 2024-12-03 posted: 2023-07-18 categories: - containers diff --git a/dedibox-network/ipv6/how-to/configure-ipv6-windows.mdx b/dedibox-network/ipv6/how-to/configure-ipv6-windows.mdx index d2759a06eb..06626236af 100644 --- a/dedibox-network/ipv6/how-to/configure-ipv6-windows.mdx +++ b/dedibox-network/ipv6/how-to/configure-ipv6-windows.mdx @@ -7,7 +7,7 @@ content: paragraph: This page explains how to configure an IPv6 subnet on a Dedibox running Windows Server. tags: dedibox ipv6 windows subnet dates: - validation: 2024-05-20 + validation: 2024-12-03 posted: 2021-08-03 categories: - dedibox-network diff --git a/dedibox-network/ipv6/how-to/enable-ipv6-slaac.mdx b/dedibox-network/ipv6/how-to/enable-ipv6-slaac.mdx index d30abc4268..ef20df316f 100644 --- a/dedibox-network/ipv6/how-to/enable-ipv6-slaac.mdx +++ b/dedibox-network/ipv6/how-to/enable-ipv6-slaac.mdx @@ -7,7 +7,7 @@ content: paragraph: This page explains how to enable IPv6 SLAAC on Dedibox servers. 
tags: dedibox slaac ipv6 dates: - validation: 2024-05-20 + validation: 2024-12-03 posted: 2021-08-03 categories: - dedibox-network diff --git a/dedibox-network/ipv6/how-to/request-prefix.mdx b/dedibox-network/ipv6/how-to/request-prefix.mdx index 615d932039..7319bb368a 100644 --- a/dedibox-network/ipv6/how-to/request-prefix.mdx +++ b/dedibox-network/ipv6/how-to/request-prefix.mdx @@ -7,7 +7,7 @@ content: paragraph: This page explains how to request a free /48 IPv6 prefix for Dedibox servers. tags: dedibox ipv6 prefix dates: - validation: 2024-05-20 + validation: 2024-12-03 posted: 2021-08-03 categories: - dedibox-network