
ARC AGI Kaggle with llama3 - First Steps

In this lightning talk at PyData London in July 2024, I spoke on the Kaggle ARC AGI competition and how I made Llama 3 write code to solve a couple of the challenges.
https://www.meetup.com/pydata-london-meetup/events/301796857/

ianozsvald

August 06, 2024
Transcript

  1. Abstraction & Reasoning Challenge. LLMs are great at memorisation, but can they reason? F. Chollet argues that they are bad at reasoning. There is a $1M prize if an LLM (or any other approach) can solve these challenges. The tasks are abstract shapes, "initial → target", given in JSON. Open-weights models only (the solution runs in an off-line environment). By [ian]@ianozsvald[.com] Ian Ozsvald
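The "initial → target" JSON format can be sketched with a toy task. This is a minimal, invented example (the mirror-rows rule and the `solve`/`train_accuracy` names are assumptions for illustration), not an actual competition task:

```python
import json

# Hypothetical ARC-style task: each "train" example maps an input grid
# to an output grid. The (invented) hidden rule here is "mirror each row".
task = json.loads("""
{
  "train": [
    {"input": [[1, 0], [0, 2]], "output": [[0, 1], [2, 0]]},
    {"input": [[3, 3, 0]], "output": [[0, 3, 3]]}
  ],
  "test": [
    {"input": [[0, 5], [5, 0]]}
  ]
}
""")

def solve(grid):
    """Candidate transform: reverse every row (matches this toy task's rule)."""
    return [row[::-1] for row in grid]

def train_accuracy(task, fn):
    """Fraction of train pairs where fn(input) exactly equals the output grid."""
    pairs = task["train"]
    correct = sum(fn(p["input"]) == p["output"] for p in pairs)
    return correct / len(pairs)
```

A candidate that scores 1.0 on the train pairs can then be applied to the held-out test inputs.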
  2. First solution. Llama.cpp with quantised Llama 8B (and 70B), via the Python llama.cpp bindings. Ask for 200 solutions per task. Try grid, list, and grid+list representations of the puzzle: grid-only was poor, list was better, grid+list was slightly better still.
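The grid, list, and grid+list representations can be sketched as prompt-building helpers. The prompt wording and function names here are assumptions; only the two encodings, and combining both per example, follow the talk:

```python
def as_grid(grid):
    """Render a grid as rows of digits, one row per line."""
    return "\n".join("".join(str(cell) for cell in row) for row in grid)

def as_list(grid):
    """Render a grid as a Python-style list of lists."""
    return str(grid)

def build_prompt(train_pairs):
    """Grid+list prompt: show each train example in both encodings,
    since the talk found the combination slightly better than either alone."""
    parts = ["Write a Python function solve(grid) mapping each input to its output."]
    for pair in train_pairs:
        parts.append("Input (grid):\n" + as_grid(pair["input"]))
        parts.append("Input (list): " + as_list(pair["input"]))
        parts.append("Output (grid):\n" + as_grid(pair["output"]))
        parts.append("Output (list): " + as_list(pair["output"]))
    return "\n".join(parts)
```

This prompt would then be sent to the quantised model many times (e.g. 200 completions per task) via `llama_cpp.Llama`, sampling a fresh candidate program each call.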
  3. Llama (normally) writes code. Failure modes: bad syntax, no code at all, calls to raw_input, and injection back into the training data (e.g. changing ints to strings).
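Given those failure modes, each generated candidate has to be compiled and run defensively. A minimal sketch (the `run_candidate` helper is an assumption, not the speaker's actual harness, and `exec` alone is not a real sandbox):

```python
def run_candidate(code_text, grid):
    """Compile and call a generated solve(grid); return (ok, result_or_reason)."""
    try:
        compiled = compile(code_text, "<generated>", "exec")
    except SyntaxError:
        return False, "bad syntax"          # failure mode: malformed code
    namespace = {}
    try:
        exec(compiled, namespace)           # NOTE: isolate properly in practice
    except Exception as exc:
        return False, f"error defining code: {exc}"
    solve = namespace.get("solve")
    if not callable(solve):
        return False, "no solve() defined"  # failure mode: no code / wrong shape
    try:
        return True, solve(grid)            # raw_input etc. surface here at runtime
    except Exception as exc:
        return False, f"runtime error: {exc}"

good = "def solve(grid):\n    return [row[::-1] for row in grid]\n"
bad = "def solve(grid)\n    return grid\n"  # missing colon -> SyntaxError
```

Candidates that survive all three stages can then be scored against the train pairs.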
  4. Summary. Llama 3 8B at IQ2 (heavy quantisation): some generated programs run correctly on the 3x3 "train" problem. Very fast, runs on a 3090 (24GB VRAM). Questions for the audience: Do you use Llama 3? Alpaca? RoPE? Do you have text correctness metrics?