C64 Receives Port of Large Language Model (LLM) Software, Albeit Incompletely
In an intriguing development, a large language model has been successfully adapted to run on a Commodore 64, the renowned 1982 personal computer. This remarkable accomplishment, dubbed the Llama2.c64 project, is the brainchild of Maciej "YTM/Elysium" Witkowiak, who painstakingly ported the Llama 2 model to the Commodore 64's 8-bit architecture.
However, it's essential to note that the 8-bit machine's 64 KB of RAM cannot hold the model on its own. To work around this, a RAM Expansion Unit (REU) with at least 2 MB is required to store the model weights and support the computations.
The version of the Llama 2 model employed is the "260K tinystories" model, which falls far short of advanced chat models like ChatGPT. Instead, it produces childlike story continuations when prompted, much like how a 3-year-old might complete a narrative using limited vocabulary and simple sentence structures.
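A quick back-of-the-envelope check shows why a 2 MB REU suffices for a model this small. The ~260K parameter count and the 2 MB REU size come from the article; the 32-bit-float assumption is ours, since the port may well store weights in a different representation:

```python
# Rough sizing sketch (not the project's actual memory layout):
# can the weights of a ~260K-parameter model fit in a 2 MB REU?

PARAMS = 260_000         # approximate parameter count of the tinystories model
BYTES_PER_PARAM = 4      # assuming 32-bit floats (hypothetical)

weight_bytes = PARAMS * BYTES_PER_PARAM
reu_bytes = 2 * 1024 * 1024   # 2 MB RAM Expansion Unit

print(f"weights: ~{weight_bytes / 1024:.0f} KiB")   # ~1016 KiB
print(f"REU:      {reu_bytes / 1024:.0f} KiB")      # 2048 KiB
print(f"fits:     {weight_bytes < reu_bytes}")      # True
```

Even with room left over for activations and scratch buffers, the weights alone occupy roughly half of the expansion RAM, which is why nothing smaller than a 2 MB unit will do.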
While the model's capabilities are far from the sophisticated AI interactions we're accustomed to, there's a certain nostalgic charm in watching its responses appear on the classic C64 display. For those captivated by the prospect of tinkering with C64 projects, Witkowiak invites others to share their experiences and feedback.
Please be aware that this is not a fully-fledged ChatGPT clone, nor is it designed to tackle complex tasks such as homework, troubled relationships, or geopolitical crises. It's more akin to a fun curiosity that pushes the boundaries of AI and retro technology alike.
For those interested in experimenting with Llama2.c64 on a real Commodore 64, execution speed is painfully slow, often taking hours for a single response. Emulators can offer some respite, though they run at authentic C64 speed unless warp mode is enabled.
In the grand scheme of AI advancements, Llama2.c64 serves as a testament to the ingenuity of technical wizards and the surprising potential of vintage hardware. Its limitations, chiefly tight memory, a tiny model, and slow processing, make it more a fascination than a practical tool for modern language processing. Yet it's undeniable that the project demonstrates just how much can be wrung out of legacy hardware.
Tinkerers could take the Llama2.c64 project further, for instance by pairing it with larger RAM expansions to fit bigger models or speed up computation. However, modifying software for the classic Commodore 64 requires a careful understanding of the limitations and idiosyncrasies of both the AI model and the vintage hardware.