
Node.jsでllama_2_.pdf

Optimisuke
February 21, 2024

Slides presented at kansai.ts #5 (2024/02/21).

Transcript

  1. Trying out node-llama-cpp

     import { fileURLToPath } from "url";
     import path from "path";
     import { LlamaModel, LlamaContext, LlamaChatSession } from "node-llama-cpp";

     const __dirname = path.dirname(fileURLToPath(import.meta.url));
     const model = new LlamaModel({
       modelPath: path.join(__dirname, "models", "ELYZA-japanese-Llama-2-7b-fast-instruct-q4_K_M.gguf"),
     });
     const context = new LlamaContext({ model });
     const session = new LlamaChatSession({ context });

     const q1 = "元気?";
     console.log("User: " + q1);
     const a1 = await session.prompt(q1);
     console.log("AI: " + a1);
  2. Trying out LangChain.js

     import { LlamaCpp } from "@langchain/community/llms/llama_cpp";

     const llamaPath = "/Users/hoge/hello-node-llama-cpp/models/ELYZA-japanese-Llama-2-7b-fast-instruct-q4_K_M.gguf";
     const model = new LlamaCpp({ modelPath: llamaPath });

     const question = "ラマって何?";
     console.log(`User: ${question}`);
     const response = await model.invoke(question);
     console.log(`AI : ${response}`);
  3. Trying out LangChain.js

     import { ChatLlamaCpp } from "@langchain/community/chat_models/llama_cpp";

     const llamaPath = "/Users/hoge/hello-node-llama-cpp/models/ELYZA-japanese-Llama-2-7b-fast-instruct-q4_K_M.gguf";
     const model = new ChatLlamaCpp({ modelPath: llamaPath, temperature: 0.7 });

     const stream = await model.stream("ラマって何?");
     for await (const chunk of stream) {
       console.log(chunk.content);
     }
  4. A look at node-llama-cpp's code

     class LLAMAModel : public Napi::ObjectWrap<LLAMAModel> {
       public:
         llama_model_params model_params;
         llama_model* model;

         LLAMAModel(const Napi::CallbackInfo& info) : Napi::ObjectWrap<LLAMAModel>(info) {
             model_params = llama_model_default_params();

             // Get the model path
             std::string modelPath = info[0].As<Napi::String>().Utf8Value();

             llama_backend_init(false);
             model = llama_load_model_from_file(modelPath.c_str(), model_params);
         }
     };
  5. A look at node-llama-cpp's code

     execute_process(COMMAND node -p "require('node-addon-api').include.slice(1,-1)"
             WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
             OUTPUT_VARIABLE NODE_ADDON_API_DIR
             OUTPUT_STRIP_TRAILING_WHITESPACE)

     include_directories(${NODE_ADDON_API_DIR} ${CMAKE_JS_INC})
     include_directories("llama.cpp")

     file(GLOB SOURCE_FILES "addon.cpp")

     target_link_libraries(${PROJECT_NAME} "llama")

     execute_process(COMMAND ${CMAKE_AR} /def:${CMAKE_JS_NODELIB_DEF} /out:${CMAKE_JS_NODELIB_TARGET} ${CMAKE_STATIC_LINKER_FLAGS})
  6. A look at node-llama-cpp's code

     import { createRequire } from "module";

     const require = createRequire(import.meta.url);

     export async function loadBin(): Promise<LlamaCppNodeModule> {
         const usedBinFlag = await getUsedBinFlag();

         if (usedBinFlag === "prebuiltBinaries") {
             const prebuildBinPath = await getPrebuildBinPath();
             return require(prebuildBinPath);
         }
     }
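     The pattern above in isolation, as a minimal sketch: an ES module has no require(), so createRequire() builds one relative to the current file, which can then load a compiled .node addon. The addon path below is purely illustrative, not the path node-llama-cpp actually resolves.

     import { createRequire } from "module";

     const require = createRequire(import.meta.url);

     // Illustrative path: node-llama-cpp computes the real prebuilt binary
     // location per platform before handing it to require().
     const addon = require("./llamaBins/darwin-arm64/llama-addon.node");
     console.log(Object.keys(addon));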
  7. A look at LangChain.js's code

     import { LlamaModel, LlamaContext, LlamaChatSession } from "node-llama-cpp";
     import {
       LlamaBaseCppInputs,
       createLlamaModel,
       createLlamaContext,
       createLlamaSession,
     } from "../utils/llama_cpp.js";

     export class LlamaCpp extends LLM<LlamaCppCallOptions> {
       _model: LlamaModel;
       _context: LlamaContext;
       _session: LlamaChatSession;

       constructor(inputs: LlamaCppInputs) {
         super(inputs);
         this._model = createLlamaModel(inputs);
         this._context = createLlamaContext(this._model, inputs);
         this._session = createLlamaSession(this._context);
       }

       async _call(prompt: string): Promise<string> {
         const promptOptions = {};
         const completion = await this._session.prompt(prompt, promptOptions);
         return completion;
       }
     }

     https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community/src/llms/llama_cpp.ts
  8. A look at llama.cpp's code

     ifdef LLAMA_METAL
         MK_CPPFLAGS += -DGGML_USE_METAL
         MK_LDFLAGS  += -framework Foundation -framework Metal -framework MetalKit
         OBJS        += ggml-metal.o

     ifdef LLAMA_METAL_NDEBUG
         MK_CPPFLAGS += -DGGML_METAL_NDEBUG
     endif
     endif # LLAMA_METAL

     ifdef LLAMA_METAL
     ggml-metal.o: ggml-metal.m ggml-metal.h
         $(CC) $(CFLAGS) -c $< -o $@
     endif # LLAMA_METAL

     https://github.com/ggerganov/llama.cpp/blob/master/Makefile