{{ reply() }}
You should now be able to send prompts to the model and see the
responses in the template.
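For reference, here is a minimal sketch of how the pieces might fit together at this point. The reply signal matches the {{ reply() }} binding above; sendPrompt() is a placeholder for whichever inference call you wired up in the previous steps (for example a Prompt API session), so adapt it to your own code:

import { Component, signal } from '@angular/core';

@Component({
  selector: 'app-chat',
  standalone: true,
  template: `
    <input #promptInput placeholder="Ask the model…" />
    <button (click)="send(promptInput.value)">Send</button>
    <p>{{ reply() }}</p>
  `,
})
export class ChatComponent {
  // Signal rendered by the {{ reply() }} binding in the template.
  protected readonly reply = signal('');

  protected async send(prompt: string): Promise<void> {
    const answer = await this.sendPrompt(prompt);
    this.reply.set(answer);
  }

  private async sendPrompt(prompt: string): Promise<string> {
    // Placeholder: replace with the actual model call from the earlier lab steps.
    return `You asked: ${prompt}`;
  }
}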
⚠ Note: Browsers offer better options for rendering streamed LLM responses; see
https://developer.chrome.com/docs/ai/render-llm-responses
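If you want to experiment with streaming, here is a minimal sketch of how the reply signal could be updated chunk by chunk. It assumes a session object whose promptStreaming() returns a ReadableStream of text chunks (as the Chrome Prompt API does) and that each chunk is a delta to append; adapt it to the model API you are actually using:

import { WritableSignal } from '@angular/core';

// Append streamed chunks to the reply signal so the template re-renders as text arrives.
export async function streamReply(
  session: { promptStreaming(prompt: string): ReadableStream<string> },
  prompt: string,
  reply: WritableSignal<string>,
): Promise<void> {
  reply.set('');
  let text = '';
  const reader = session.promptStreaming(prompt).getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += value;   // assumes delta chunks; some APIs send the full text so far
    reply.set(text); // the {{ reply() }} binding updates with each chunk
  }
}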