Replies: 6 comments 6 replies
-
@Webbanditten you got some GPUs? 👉👈🥺
-
This can be your own project. Kepler isn't going to follow this architecture. Kepler is very stable and there's no reason why this should change. I would welcome it in a new project, but I am busy with my own hobbies.
-
Vulkan-based acceleration in Ollama is working ✅ (Update: no, it isn't; it's because Claude Code isn't really designed to be used with other models. OpenCode works fine.) .. also gotta find a way to limit the GPU wattage during compute lol, can't have it shooting up to 300 W all the time. Electricity is expensive here 😔 (Update: use LACT, set Power Profile Mode to
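For the wattage-limiting part: on Linux, the amdgpu driver exposes the board power cap through sysfs hwmon (in microwatts), which is the same knob LACT turns. A minimal sketch, assuming a card at `/sys/class/drm/card0`; the exact hwmon directory and allowed limits vary per GPU:

```python
#!/usr/bin/env python3
"""Sketch: read/set an amdgpu power cap via sysfs hwmon.
Paths are illustrative; LACT does the same thing with a nicer UI."""
from pathlib import Path

def find_hwmon(card: str = "card0") -> Path:
    """Return the first hwmon directory for the given DRM card."""
    hwmon_root = Path(f"/sys/class/drm/{card}/device/hwmon")
    return next(hwmon_root.iterdir())

def read_cap_watts(hwmon: Path) -> float:
    # power1_cap is reported in microwatts by the amdgpu driver.
    return int((hwmon / "power1_cap").read_text()) / 1_000_000

def set_cap_watts(hwmon: Path, watts: float) -> None:
    # Writing requires root; the driver clamps to power1_cap_min/max.
    (hwmon / "power1_cap").write_text(str(int(watts * 1_000_000)))

if __name__ == "__main__":
    if Path("/sys/class/drm/card0/device/hwmon").is_dir():
        hw = find_hwmon()
        print(f"current cap: {read_cap_watts(hw):.0f} W")
        # set_cap_watts(hw, 220)  # needs root; e.g. cap at 220 W
```

Capping to ~220 W instead of 300 W usually costs only a little compute throughput, since the last watts buy the fewest megahertz.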
-
Okay, some more research; learned some things. Will move this discussion and write it up somewhere else later; for now I just need to brain-dump before I lose track of these thoughts. Other relevant discussions/issues/links:
Observations so far:
-
@Webbanditten How much dedotated WAM do you have? 👉👈🥺 List of models worth trying: Probably
-
I'm really intrigued by the Granite Code model from IBM, and the fact that it's very open, well-researched, documented, and reproducible. I feel like its usefulness can be much improved by giving it access to MCPs and tools using OpenCode. Something like this. And the best part: it fits entirely in 16GB of VRAM, even with a 125K context.
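Whether a long context fits in 16 GB comes down to quantised weight size plus KV cache, and the KV cache depends heavily on grouped-query attention and KV quantisation. A rough estimator (the demo numbers are placeholders, not Granite's real config; check the model card for layer count, KV heads, and head dim):

```python
"""Back-of-envelope VRAM estimator: quantised weights + KV cache."""

def weights_gib(params_billion: float, bits_per_weight: float) -> float:
    """Size of the quantised weights in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx_tokens: int, bytes_per_elem: int = 2) -> float:
    """KV cache in GiB: 2 tensors (K and V) per layer, per token."""
    return (2 * layers * kv_heads * head_dim
            * ctx_tokens * bytes_per_elem) / 2**30

# Hypothetical 8B model with grouped-query attention, 4-bit weights:
demo = weights_gib(8, 4) + kv_cache_gib(layers=32, kv_heads=8,
                                        head_dim=128, ctx_tokens=125_000)
```

With fewer KV heads (stronger GQA) or 8-bit/4-bit KV cache, the cache term shrinks proportionally, which is what makes 125K context plausible on a 16 GB card.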

-
Long read, might want to get some coffee.
Continuing on some ramblings here: I was wrong; it seems we are already at the stage in the development cycle where Claude Code can be used without relying on the cloud, and without being vendor locked-in to some "AI" company charging you through the nose for "tokens". I hope it remains this way, and I hope the ML landscape and hardware improve to the point where people like us can start training models on personal knowledgebases (without requiring a datacenter or ~60 GW). That would be true independence, true open source/libre/free ML.
Unless that's already a thing too? Let me know. I've been out of the loop.
I think with the help of Claude Code (either the cloud version, or self-hosted on a single machine or across many machines), it becomes attainable to get a real, proper (and improved) re-implementation of Aapo's original design from this paper (Habbo Architecture v2).
It has been attempted before, but never finished. At that time there were no force multipliers like Claude Code available (okay, maybe crowdsourced MTurk labour lol), so it would've been a huge undertaking that most people (including me) wouldn't commit to. Quackster did commit to it for this project, which is astonishing and commendable.
Now these force multipliers do exist, and the context window of Claude Code seems large enough for it to grasp higher-level concepts, to the point where it can write effective code for a many-services, many-node Habbo server implementation.
But before one starts burning time and consuming energy for nothing, Claude Code should be given the best possible chance and ability to comprehend the task. I think the following strategy will work:
Planning phase
- Create a monorepo
- Add protocol and research detail:
  - Use Pandoc to convert the different source formats to Markdown
  - Relevant files from here as Markdown (thanks love, I guess)
  - Puomi Wiki
  - Relevant RaGEZONE topics as Markdown, Nillus' packet explainer, some posts from Moogkip, etc.
  - Feed it enough knowledge about Aaron to despise him
  - Protocol logic from emulators and other tooling, read by Claude and digested to Markdown
  - Reactor pattern from Woodpecker and Thor 2.0, digested by Claude to Markdown
- Add repositories as submodules in a repos folder, to aid in building context
- Once sufficiently complete emulator repos are added, let Claude Code digest the important game logic loops to Markdown text, separated by Reactor, which should roughly equal the client's CCT module separation
- Add Claude Code instructions/skills/commands to that monorepo:
  - to interpret those Markdown files and aid in the implementation,
  - along with instructions in the README on how to execute the Claude Code commands
- Implementation details meaning: communication protocol details, game logic loops, and documentation on implementing game servers using .NET Orleans
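The Pandoc conversion step could be scripted along these lines. A sketch: the `docs-src/` layout and the extension-to-format map are assumptions, and it assumes Pandoc is on the PATH:

```python
#!/usr/bin/env python3
"""Sketch of the "convert everything to Markdown" step of the planning
phase. Walks a source tree and shells out to Pandoc per file."""
import subprocess
from pathlib import Path

# Pandoc input format per source extension (extend as needed).
FORMATS = {".html": "html", ".docx": "docx", ".rst": "rst", ".tex": "latex"}

def convert_tree(src: Path, dst: Path) -> list[Path]:
    """Convert every recognised file under src to GitHub Markdown in dst."""
    written = []
    for path in src.rglob("*"):
        fmt = FORMATS.get(path.suffix.lower())
        if fmt is None:
            continue  # skip files Pandoc shouldn't touch
        out = dst / path.relative_to(src).with_suffix(".md")
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["pandoc", str(path), "-f", fmt, "-t", "gfm", "-o", str(out)],
            check=True,
        )
        written.append(out)
    return written
```

Writing `gfm` (GitHub-Flavoured Markdown) keeps the output close to what Claude Code already sees in READMEs.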
Revision phase
- Execute the generated Claude Code commands to extract the game features to Markdown
- At this point, if all game features are properly documented, we can clean the repo up and delete all sources to lower context cost
- Ask Claude Code to generate and save Markdown files in the repo containing detailed written plans for implementing the Orleans services: one plan per client game feature, carrying over the game logic from the previously generated definition files
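The one-plan-per-feature step could be bootstrapped by generating a prompt file per digested feature doc. A sketch with hypothetical names (`features/`, `plans/`, the prompt wording are all assumptions):

```python
"""Sketch: turn a features/ directory of digested game-feature notes
into one plan-prompt file per feature for Claude Code to act on."""
from pathlib import Path

# Hypothetical prompt template; only {feature_doc} is substituted.
PROMPT = """Read {feature_doc} and write a detailed implementation plan for a
.NET Orleans service covering this game feature. Save the plan as Markdown."""

def emit_plan_prompts(features_dir: Path, plans_dir: Path) -> list[Path]:
    """Write one plan-prompt file per feature doc; returns the files written."""
    plans_dir.mkdir(parents=True, exist_ok=True)
    prompts = []
    for doc in sorted(features_dir.glob("*.md")):
        prompt_file = plans_dir / f"plan-{doc.stem}.prompt.md"
        prompt_file.write_text(PROMPT.format(feature_doc=doc.name))
        prompts.append(prompt_file)
    return prompts
```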
Execution phase
- Clean up the repo again, leaving only the Markdown files with the detailed plans for implementing each per-feature Orleans service
- Ask Claude to execute each plan, then clear context and continue with the next plan
- Profit???
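The "execute, clear context, next plan" loop falls out naturally if each plan gets its own Claude Code process: a fresh process is a fresh context window. A sketch, assuming the `claude` CLI's non-interactive `-p` (print) mode; the prompt wording is illustrative:

```python
"""Sketch of the execution phase: one Claude Code run per plan file."""
import subprocess
from pathlib import Path

def execute_plans(plans_dir: Path, runner=subprocess.run) -> int:
    """Run Claude Code once per plan file; returns how many plans ran."""
    ran = 0
    for plan in sorted(plans_dir.glob("*.md")):
        # A separate process per plan = context cleared between plans.
        runner(["claude", "-p", f"Execute the plan in {plan.name} exactly."],
               check=True)
        ran += 1
    return ran
```

The injectable `runner` is just there so the loop can be dry-run or tested without actually spending tokens.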
Heavily simplified of course, but in theory this should work. In practice, we'll see. I'm configuring Ollama to use my 9070 XT locally for Claude Code right now.