Basic messages fed to an LLM:
  history: the conversation so far
  context: documents returned by the retriever (the part used for RAG)

LLM APIs are roughly the same across vendors, but each differs slightly.

OpenAI (Chat Completions):

{
  "model": "gpt-4o-mini",
  "messages": [{"role": "user", "content": "Say this is a test"}],
  "temperature": 0.7
}

Gemini (generateContent):

{
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "Write the first line of a story about a magic backpack"}]
    }
  ]
}

Genkit (generate()):

const llmResponse = await generate({
  model: gemini15Flash,
  prompt: 'Tell me a joke.',
});
console.log(await llmResponse.text());
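To make the roles of history and context concrete, here is a minimal TypeScript sketch that folds both into the messages array of an OpenAI-style Chat Completions request. The helper name askWithContext, the system-prompt wording, and the retrievedDocs input are illustrative assumptions, not part of any SDK.

// Sketch: history (past turns) plus retriever output (context) in one request.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

async function askWithContext(
  history: ChatMessage[],
  retrievedDocs: string[],   // documents returned by the retriever (RAG)
  question: string,
): Promise<string> {
  const messages: ChatMessage[] = [
    // Retrieved documents go in as context for the model.
    { role: 'system', content: 'Answer using this context:\n' + retrievedDocs.join('\n---\n') },
    // The conversation so far.
    ...history,
    // The new user turn.
    { role: 'user', content: question },
  ];

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + process.env.OPENAI_API_KEY,
    },
    body: JSON.stringify({ model: 'gpt-4o-mini', messages, temperature: 0.7 }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}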
---
model: vertexai/gemini-1.5-flash
output:
  format: json
  schema:
    name: string
    price: integer
    ingredients(array): string
---
Generate a menu item that could be found at a {{theme}} themed restaurant.
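A .prompt file like the one above is then loaded and called from code. The following sketch assumes the Genkit dotprompt API of this Genkit generation (prompt() from @genkit-ai/dotprompt, generate(), output()); the file name 'menu' and the theme value are made up for illustration, so check the names against the Genkit version actually in use.

import { prompt } from '@genkit-ai/dotprompt';

// Assumed file name: prompts/menu.prompt, containing the frontmatter and template above.
const menuPrompt = await prompt('menu');

// Fill in {{theme}} and call the model declared in the frontmatter.
const result = await menuPrompt.generate({
  input: { theme: 'seafood' },
});

// Because the frontmatter declares format: json with a schema,
// output() should return the parsed object ({ name, price, ingredients }).
console.log(result.output());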
# LangChain on Cloud Functions for Firebase: translate a message with Gemini via Vertex AI.
from firebase_functions import https_fn, options
from firebase_admin import initialize_app
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI
from langchain_core.output_parsers import StrOutputParser

initialize_app()


@https_fn.on_request(
    memory=options.MemoryOption.GB_1,
)
def on_request_example(req: https_fn.Request) -> https_fn.Response:
    model = ChatVertexAI(model="gemini-1.5-flash")
    messages = [
        SystemMessage(content="Translate the following from English into Italian"),
        HumanMessage(content="hi!"),
    ]
    result = model.invoke(messages)
    # Extract the plain string from the model's message and return it as the HTTP response.
    parser = StrOutputParser()
    return https_fn.Response(parser.invoke(result))
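Once deployed, on_request_example is just an HTTPS endpoint, so any HTTP client can call it. A minimal sketch follows; the URL is a placeholder, so use the one printed by firebase deploy.

// Placeholder URL; replace with the function URL shown after deployment.
const res = await fetch('https://<region>-<project-id>.cloudfunctions.net/on_request_example');
console.log(await res.text()); // the Italian translation returned by the model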