Slide 40
How to make a dataset (initial log entry)
{
  "insertId": "rroswqf9rk00w",
  "jsonPayload": {
    "message": "1a3efd1b-6aa2-41e2-92c2-56f58471acf3;;;a9a840e2-521c-4b3b-82a0-80bee1ca8122;;;question;;;If I get Input is too long for requested model when sending data to sonnet 3.5, how can I manage this? split the text and do more than call or how ?"
  },
  …
  "timestamp": "2024-10-14T07:40:44.646Z",
  "severity": "INFO",
  "labels": {
    "loggerName": "com.vaadin.demo.views.ChatView",
    "levelValue": "20000",
    "levelName": "INFO"
  },
  "logName": "projects/langchain4j/logs/application-log",
  "receiveTimestamp": "2024-10-14T07:40:44.833940511Z"
}
…
{
  "insertId": "8d5lksf9xmvie",
  "jsonPayload": {
    "message": "1a3efd1b-6aa2-41e2-92c2-56f58471acf3;;;retrieved;;;tutorials\\1-chat-and-language-models.md;7;;integrations\\language-models\\jlama.md;7;;tutorials\\6-tools.md;6"
  },
  …
},
{
  "insertId": "3bya7df19n47j",
  "jsonPayload": {
    "message": "1a3efd1b-6aa2-41e2-92c2-56f58471acf3;;;a9a840e2-521c-4b3b-82a0-80bee1ca8122;;;answer;;;If you encounter the error \"Input is too long for requested model\" when sending data to the Sonnet 3.5 model, you can manage this by splitting the text into smaller chunks and making multiple calls to the model. Since LLMs do not maintain the state of the conversation, you'll need to keep track of the context yourself.\n\nHere’s how you can handle it:\n\n1. **Split the Text:** Break your input into smaller, manageable pieces that fit within the model's input limits.\n\n2. **Maintain Context:** When making multiple calls, ensure you keep the relevant context from previous interactions. This may involve including previous user and AI messages in each new request.\n\n3. **Use `UserMessage`:** Create a `UserMessage` for each chunk of text. You can use `new UserMessage(\"Your text here\")` or `UserMessage.from(\"Your text here\")` to do this.\n\n4. **Call the Model:** For each chunk, generate a response using the model, like so:\n ```java\n UserMessage userMessage = UserMessage.from(\"Your chunk of text here\");\n Response response = model.generate(singletonList(userMessage), toolSpecifications);\n AiMessage aiMessage = response.content();\n ```\n\n5. **Combine Responses:** After processing all chunks, combine the responses to form a complete output.\n\nBy following these steps, you can effectively manage long inputs and maintain a coherent conversation with the model."
  },
  …
}
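The `;;;`-delimited messages in the entries above can be parsed into question/answer pairs to assemble the dataset. The sketch below assumes a field layout inferred from the sample payloads (`id;;;messageId;;;type;;;content` for question/answer entries, `id;;;retrieved;;;docs` for retrieval entries); the class name, method names, and the sample messages in `main` are illustrative, not part of the actual pipeline.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: turn the ';;;'-delimited log messages into question/answer
// pairs keyed by conversation id. The field layout is inferred from the
// sample log entries; adjust it if your logger emits something different.
public class LogDatasetBuilder {

    record LogRecord(String conversationId, String type, String content) {}

    static LogRecord parse(String message) {
        // Limit the split so any ';;;' inside the content survives intact.
        String[] parts = message.split(";;;", 4);
        if (parts.length == 3) {
            // retrieval entry: id;;;retrieved;;;doc;score;;doc;score…
            return new LogRecord(parts[0], parts[1], parts[2]);
        }
        // question/answer entry: id;;;messageId;;;type;;;content
        return new LogRecord(parts[0], parts[2], parts[3]);
    }

    // Collect question and answer texts per conversation id.
    static Map<String, Map<String, String>> buildDataset(List<String> messages) {
        Map<String, Map<String, String>> dataset = new LinkedHashMap<>();
        for (String message : messages) {
            LogRecord r = parse(message);
            if (r.type().equals("question") || r.type().equals("answer")) {
                dataset.computeIfAbsent(r.conversationId(), k -> new LinkedHashMap<>())
                       .put(r.type(), r.content());
            }
        }
        return dataset;
    }

    public static void main(String[] args) {
        // Illustrative messages mirroring the three log entries above.
        List<String> messages = List.of(
            "conv-1;;;msg-1;;;question;;;If I get Input is too long, how can I manage this?",
            "conv-1;;;retrieved;;;tutorials\\1-chat-and-language-models.md;7",
            "conv-1;;;msg-2;;;answer;;;Split the text into smaller chunks."
        );
        Map<String, String> pair = buildDataset(messages).get("conv-1");
        System.out.println(pair.get("question"));
        System.out.println(pair.get("answer"));
    }
}
```

Retrieval entries are skipped here because a Q/A dataset only needs the paired question and answer; if you also want the retrieved document names, keep the `retrieved` records alongside each pair.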