
Commit

Merge pull request #1141 from rsvoboda/type.fixes.2024-12-06
Typo fixes
geoand authored Dec 6, 2024
2 parents d9801a9 + 61a3089 commit d7027e5
Showing 10 changed files with 27 additions and 27 deletions.
@@ -477,7 +477,7 @@ public void handleDeclarativeServices(AiServicesRecorder recorder,
List<DeclarativeAiServiceBuildItem> declarativeAiServiceItems,
List<SelectedChatModelProviderBuildItem> selectedChatModelProvider,
BuildProducer<SyntheticBeanBuildItem> syntheticBeanProducer,
-BuildProducer<UnremovableBeanBuildItem> unremoveableProducer) {
+BuildProducer<UnremovableBeanBuildItem> unremovableProducer) {

boolean needsChatModelBean = false;
boolean needsStreamingChatModelBean = false;
@@ -668,7 +668,7 @@ public void handleDeclarativeServices(AiServicesRecorder recorder,
// constructor to obtain an instance.
if (bi.isCustomRetrievalAugmentorSupplierClassIsABean()) {
configurator.addInjectionPoint(ClassType.create(retrievalAugmentorSupplierClassName));
-unremoveableProducer
+unremovableProducer
.produce(UnremovableBeanBuildItem.beanClassNames(retrievalAugmentorSupplierClassName));
}
}
@@ -724,34 +724,34 @@ public void handleDeclarativeServices(AiServicesRecorder recorder,
}

if (needsChatModelBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.CHAT_MODEL));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.CHAT_MODEL));
}
if (needsStreamingChatModelBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.STREAMING_CHAT_MODEL));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.STREAMING_CHAT_MODEL));
}
if (needsChatMemoryProviderBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.CHAT_MEMORY_PROVIDER));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.CHAT_MEMORY_PROVIDER));
}
if (needsRetrieverBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.RETRIEVER));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.RETRIEVER));
}
if (needsRetrievalAugmentorBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.RETRIEVAL_AUGMENTOR));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.RETRIEVAL_AUGMENTOR));
}
if (needsAuditServiceBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.AUDIT_SERVICE));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.AUDIT_SERVICE));
}
if (needsModerationModelBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.MODERATION_MODEL));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.MODERATION_MODEL));
}
if (needsImageModelBean) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.IMAGE_MODEL));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(LangChain4jDotNames.IMAGE_MODEL));
}
if (!allToolProviders.isEmpty()) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(allToolProviders));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(allToolProviders));
}
if (!allToolNames.isEmpty()) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(allToolNames));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(allToolNames));
}
}

@@ -600,9 +600,9 @@ public void cleanUp(LangChain4jRecorder recorder, ShutdownContextBuildItem shutd
}

@BuildStep
-public void unremoveableBeans(BuildProducer<UnremovableBeanBuildItem> unremoveableProducer) {
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(ObjectMapper.class));
-unremoveableProducer.produce(UnremovableBeanBuildItem.beanTypes(ModelAuthProvider.class));
+public void unremovableBeans(BuildProducer<UnremovableBeanBuildItem> unremovableProducer) {
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(ObjectMapper.class));
+unremovableProducer.produce(UnremovableBeanBuildItem.beanTypes(ModelAuthProvider.class));
}

@BuildStep
@@ -1,7 +1,7 @@
package io.quarkiverse.langchain4j.runtime.aiservice;

/**
-* Exception thrown when a input or output guardrail validation fails.
+* Exception thrown when an input or output guardrail validation fails.
* <p>
* This exception is not intended to be used in guardrail implementation.
*/
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/easy-rag.adoc
@@ -14,7 +14,7 @@ ingest them into an in-memory embedding store.

Apache Tika, a library for parsing various file formats, is used under the
hood, so your documents can be in any of its supported formats (plain text,
-PDF, DOCX, HTML, etc), including images with text, which will be parsed
+PDF, DOCX, HTML, etc.), including images with text, which will be parsed
using OCR (OCR requires to have the Tesseract library installed in your
system - see https://cwiki.apache.org/confluence/display/TIKA/TikaOCR).
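For context, wiring Easy RAG up is mostly configuration. The snippet below is a sketch only: the `path` property is an assumption based on the extension's `quarkus.langchain4j.easy-rag.*` naming scheme, and the directory is a placeholder, not taken from this commit:

```properties
# Directory whose files Easy RAG parses (via Apache Tika) and ingests at startup
quarkus.langchain4j.easy-rag.path=src/main/resources/documents
```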

4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/enable-disable-integrations.adoc
@@ -2,9 +2,9 @@

include::./includes/attributes.adoc[]

-By default, all integrations with AI providers (OpenAI, HuggingFace, Azure OpenAI, etc) are enabled. This means that live calls are made to the configured AI provider.
+By default, all integrations with AI providers (OpenAI, HuggingFace, Azure OpenAI, etc.) are enabled. This means that live calls are made to the configured AI provider.

-Each provider has an `enable-integration` property (i.e. `quarkus.langchain4j.openai.enable-integration`, `quarkus.langchain4j.huggingface.enable-integration`, etc) that can be set to `false` to disable the integration. This property is read at runtime.
+Each provider has an `enable-integration` property (i.e. `quarkus.langchain4j.openai.enable-integration`, `quarkus.langchain4j.huggingface.enable-integration`, etc.) that can be set to `false` to disable the integration. This property is read at runtime.

When disabled, any call made to the AI provider will end up in an `dev.langchain4j.model.ModelDisabledException` runtime exception being thrown.
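The property named in this doc page can be sketched in `application.properties` like so, using the OpenAI key the page itself cites:

```properties
# Disable live calls to OpenAI; calls then throw ModelDisabledException at runtime
quarkus.langchain4j.openai.enable-integration=false
```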

4 changes: 2 additions & 2 deletions docs/modules/ROOT/pages/guardrails.adoc
@@ -163,7 +163,7 @@ public interface Simulator {
In this example, the `VerifyHeroFormat` is executed first to check that the passed hero is valid.
Then, the `VerifyVillainFormat` is executed to check that the villain is valid.

-If the `VerifyHeroFormat` fails, the `VerifyVillainFormat` may or may not be executed depending on whether the failure is fatal or not. For instance the `VerifyHeroFormat` could be implemented as it follows.
+If the `VerifyHeroFormat` fails, the `VerifyVillainFormat` may or may not be executed depending on whether the failure is fatal or not. For instance, the `VerifyHeroFormat` could be implemented as it follows.

[source,java]
----
@@ -511,7 +511,7 @@ public class HallucinationGuard implements OutputGuardrail {
----

=== Rewriting the LLM output
-It may happen that the output generated by the LLM is not completely satisfying, but it can be programmatically adjusted instead of attempting a retry or a remprompt, both implying a costly, time consuming and less reliable new interaction with the LLM. For instance it is quite common that an LLM produces the json of the data object that it is required to extract from the user prompt, but appends to it some unwanted explanation of why it generated that result, making the json unparsable, something like
+It may happen that the output generated by the LLM is not completely satisfying, but it can be programmatically adjusted instead of attempting a retry or a reprompt, both implying a costly, time consuming and less reliable new interaction with the LLM. For instance, it is quite common that an LLM produces the json of the data object that it is required to extract from the user prompt, but appends to it some unwanted explanation of why it generated that result, making the json unparsable, something like

[source]
----
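The rewriting scenario this doc page describes, where an LLM appends commentary that makes its JSON unparsable, usually reduces to plain string handling. The following standalone Java sketch (a hypothetical `JsonExtractor` helper, not part of the guardrail API) keeps only the span from the first `{` to the last `}`:

```java
public class JsonExtractor {

    /**
     * Returns the substring spanning the first '{' through the last '}',
     * dropping any commentary the model wrapped around the JSON payload.
     * If no braces are found, the input is returned unchanged.
     */
    public static String extractJson(String llmOutput) {
        int start = llmOutput.indexOf('{');
        int end = llmOutput.lastIndexOf('}');
        if (start < 0 || end < start) {
            return llmOutput; // no JSON object detected; leave output as-is
        }
        return llmOutput.substring(start, end + 1);
    }

    public static void main(String[] args) {
        String raw = "{\"hero\":\"Batman\"} I generated this JSON because you asked for a hero.";
        System.out.println(extractJson(raw)); // prints {"hero":"Batman"}
    }
}
```

In a real output guardrail this logic would run before handing the text to the JSON deserializer; a first-to-last brace scan covers the single-object case shown in the doc, but nested or multiple objects would need a real parser.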
@@ -420,7 +420,7 @@ a| [[quarkus-langchain4j-watsonx_quarkus-langchain4j-watsonx-chat-model-temperat

[.description]
--
-What sampling temperature to use,. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.
+What sampling temperature to use. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.

We generally recommend altering this or `top_p` but not both.
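As a concrete illustration of the description above, a low temperature favors deterministic completions; the property name below is inferred from this reference page's anchor id and should be treated as an assumption:

```properties
# Favor focused, deterministic completions over creative ones
quarkus.langchain4j.watsonx.chat-model.temperature=0.2
```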

@@ -1332,7 +1332,7 @@ a| [[quarkus-langchain4j-watsonx_quarkus-langchain4j-watsonx-model-name-chat-mod

[.description]
--
-What sampling temperature to use,. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.
+What sampling temperature to use. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.

We generally recommend altering this or `top_p` but not both.

@@ -420,7 +420,7 @@ a| [[quarkus-langchain4j-watsonx_quarkus-langchain4j-watsonx-chat-model-temperat

[.description]
--
-What sampling temperature to use,. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.
+What sampling temperature to use. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.

We generally recommend altering this or `top_p` but not both.

@@ -1332,7 +1332,7 @@ a| [[quarkus-langchain4j-watsonx_quarkus-langchain4j-watsonx-model-name-chat-mod

[.description]
--
-What sampling temperature to use,. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.
+What sampling temperature to use. Higher values like `0.8` will make the output more random, while lower values like `0.2` will make it more focused and deterministic.

We generally recommend altering this or `top_p` but not both.

2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/web-search.adoc
@@ -13,7 +13,7 @@ WebSearchEngine engine;

and then use it by calling its `search` method.

-If you want to let an chat model use web search by itself, there are
+If you want to let a chat model use web search by itself, there are
generally two recommended ways to accomplish this: either by implementing a
tool that uses it, or as a content retriever inside a RAG pipeline. The
https://github.com/quarkiverse/quarkus-langchain4j/tree/main/samples/chatbot-web-search[chatbot-web-search]
@@ -70,7 +70,7 @@ public interface ChatModelConfig {
Double presencePenalty();

/**
-* What sampling temperature to use,. Higher values like <code>0.8</code> will make the output more random, while lower
+* What sampling temperature to use. Higher values like <code>0.8</code> will make the output more random, while lower
* values
* like <code>0.2</code> will make it more focused and deterministic.
* <p>