curl -sS "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $VLLM_API_KEY" \
  -d "{ \"model\": \"meta-llama/$MODEL\", \"prompt\": \"Jim Keller is?\", \"max_tokens\": 60, \"temperature\": 0 }" | jq

{
  "id": "cmpl-9c65c696ebaa4031a5900aaec091ab11",
  "object": "text_completion",
  "created": 1761145166,
  "model": "meta-llama/Llama-3.1-8B-Instruct",
  "choices": [
    {
      "index": 0,
      "text": " (Part 2)\nJim Keller is a renowned American computer architect and engineer, best known for his work at AMD and Apple. He is credited with designing the x86-64 architecture, which is the foundation of modern personal computers.\nKeller's career spans over three decades, with significant contributions to",
      "logprobs": null,
      "finish_reason": "length",
      "stop_reason": null,
      "prompt_logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "total_tokens": 65,
    "completion_tokens": 60,
    "prompt_tokens_details": null
  }
}

https://docs.tenstorrent.com/getting-started/vLLM-servers.html#deploying-llms

Note: for the tt-inference-server branch, try bh-getting-started first; once that works, feel free to move on to dev or the like.
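For scripting, the same request can be sent from Python. A minimal sketch using only the standard library; the endpoint URL, model name, and the VLLM_API_KEY environment variable are carried over from the curl example above:

```python
import json
import os
import urllib.request


def build_payload(model: str, prompt: str, max_tokens: int = 60) -> dict:
    """Build the /v1/completions request body, mirroring the curl -d argument."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0,
    }


def complete(base_url: str, api_key: str, payload: dict) -> dict:
    """POST the payload to the vLLM completions endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_payload("meta-llama/Llama-3.1-8B-Instruct", "Jim Keller is?")
    result = complete("http://localhost:8000", os.environ["VLLM_API_KEY"], payload)
    # The generated text lives at choices[0].text, as in the jq output above.
    print(result["choices"][0]["text"])
```

Since the endpoint is OpenAI-compatible, the official openai Python client pointed at base_url="http://localhost:8000/v1" would work as well.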