I upgraded the llm-gpt4all plugin to support running Llama 3 8B Instruct (thanks, Nomic AI):

```
llm install --upgrade llm-gpt4all
llm -m Meta-Llama-3-8B-Instruct 'Write Python code to print 5 great names for a pet pelican'
```

The quantized model is a 4.34GB download and should run on a machine with 8GB of RAM - it works great on my M2 MacBook Pro.
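
You can also call the model from Python via LLM's programmatic API. Here's a minimal sketch, assuming llm-gpt4all is installed and registers the model under the same `Meta-Llama-3-8B-Instruct` ID:

```
import llm

# Assumes the llm-gpt4all plugin is installed and exposes this model ID
model = llm.get_model("Meta-Llama-3-8B-Instruct")

# Run the same prompt as the CLI example and print the full response text
response = model.prompt(
    "Write Python code to print 5 great names for a pet pelican"
)
print(response.text())
```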