Slide 18
Appendix
● WasmEdge Runtime
https://wasmedge.org/
● WasmEdge Provides a Better Way to Run LLMs on the Edge
https://www.secondstate.io/articles/wasmedge-ggml-plugin/
● WASM Runtimes vs. Containers: Cold Start Delays (Part 1)
https://levelup.gitconnected.com/wasm-runtimes-vs-containers-performance-evaluation-part-1-454cada7da0b
● Released a commercially usable Japanese LLM based on Meta's "Llama 2"
https://note.com/elyza/n/na405acaca130
● GGUF Models
https://github.com/second-state/LlamaEdge/blob/main/models.md
● Running ELYZA-japanese-Llama-2-7b with Rust on an M1 Mac