
Konstantin Rusinevich (B34STW4RS)

ML Engineer & Game Systems Architect

20 years of engineering experience, now applied to bridging traditional game engines (C++ / Unreal) with neural world models (PyTorch).

Research focus: real-time world models and ethical data synthesis.

Interested in LLM-assisted tooling and prompt engineering as a way to revisit and extend classic game experiences.

Active professional work is primarily in private organizational repositories.

This profile will contain selected public artifacts and demos.


01. Neural World Models & Inverse Dynamics (Proprietary Research)

Note: The underlying architecture is proprietary; the clips below demonstrate the system's ability to synthesize worlds and infer control states. This was entirely a team effort with some of the most talented people I have ever had the pleasure to work with. V1 was trained on a single local RTX 4090 and V2 on a single local RTX 5090, and the system runs inference locally at 20 fps on a single 4090 (imagine what we could do with the resources to go bigger).

Phase 1: V1 Research Prototype

  • Goal: Impressed by the sudden emergence of world models, our team coalesced around building a proprietary version of the technology.
  • Constraint: Optimized to train entirely on a single consumer GPU (RTX 4090).
  • Pipeline Optimization: To remove training bottlenecks, we decoupled the data pipeline: inputs and frames were pre-encoded into tensor chunks offline, separating data preparation from the training loop to maximize throughput.
  • Validation: Initially benchmarked against Minecraft data to verify our novel architecture (distinct from SOTA), then validated on a barebones UE5 environment.
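The pre-encoding step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration using NumPy arrays as a stand-in for the encoded tensors; the chunk length, file layout, and function names are my own, not the project's.

```python
import numpy as np
from pathlib import Path

# Illustrative sketch of a "decoupled" pipeline: frames and input vectors are
# pre-encoded once into fixed-size chunk files, so the training loop only has
# to load ready-made chunks instead of decoding video on the fly.
CHUNK_LEN = 64  # frames per chunk (illustrative value, not from the project)

def pre_encode(frames: np.ndarray, inputs: np.ndarray, out_dir: Path) -> list:
    """Split a (T, H, W, C) frame array and (T, A) input array into chunk files."""
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, start in enumerate(range(0, len(frames), CHUNK_LEN)):
        path = out_dir / f"chunk_{i:05d}.npz"
        np.savez(path,
                 frames=frames[start:start + CHUNK_LEN],
                 inputs=inputs[start:start + CHUNK_LEN])
        paths.append(path)
    return paths

def load_chunk(path: Path):
    """Training-side loader: reads one pre-encoded chunk back from disk."""
    data = np.load(path)
    return data["frames"], data["inputs"]
```

Because the chunks are plain files, the encoding pass and the training pass can run as separate processes, which is the essence of the decoupling.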
Static / In-Place Movement / Motion
Minecraft_InPlace_01.mp4
UE_Neural_GameDreamer_v_01.mp4

V1 Inverse Dynamics Predictor
Concept developed after V2, demonstrated here running on the V1 architecture. This validated our ability to infer inputs from raw video, removing the strict requirement for ground-truth frame/input pairs during training.
UE_Neural_Game_Dreamer_p_01.mp4
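The core idea of inverse dynamics (recovering the input from how the frame changed) can be shown with a toy example. The real predictor is a learned model over raw video; the sketch below cheats by using a known forward model on a 1D "world", purely to make the inference principle concrete. All names here are illustrative.

```python
import numpy as np

# Toy inverse-dynamics sketch (not the proprietary architecture): given two
# consecutive observations, infer which control input produced the transition.
ACTIONS = (0, 1, 2)  # left / none / right

def step(world: np.ndarray, action: int) -> np.ndarray:
    """Toy environment: action 0 shifts the pattern left, 2 shifts it right."""
    return np.roll(world, action - 1)

def infer_action(prev: np.ndarray, curr: np.ndarray) -> int:
    """Pick the action whose predicted next observation best matches `curr`."""
    errors = [np.abs(step(prev, a) - curr).sum() for a in ACTIONS]
    return int(np.argmin(errors))
```

In the real system the forward model is not available, so a network is trained on ground-truth pairs to perform this same mapping directly from pixels.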


Phase 2: V2 Increased Visual Fidelity

  • Goal: To establish a fully ethical, high-fidelity data source. This version utilized an expanded architecture to handle increased environment complexity.
  • Constraint: Optimized to train on a single RTX 5090 while maintaining inference capability on a 4090.
  • Data Harvesting: To solve data volume bottlenecks, we deployed bots with multiple viewports, traversing the game world to capture 4x more data per instance.
  • Validation: The architecture proved highly capable, though the results highlighted an exponential need for data scaling to eliminate artifacts.
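The multi-viewport harvesting idea above amounts to amortizing one simulation step across several cameras. A minimal sketch, with the renderer stubbed out and all names hypothetical:

```python
import numpy as np

# Sketch of multi-viewport harvesting: one game instance renders several
# camera viewports per simulation tick, so each tick yields several samples
# instead of one -- the "4x more data per instance" figure from the write-up.
N_VIEWPORTS = 4

def render_viewport(tick: int, viewport: int) -> np.ndarray:
    """Stand-in renderer: returns a dummy frame tagged with (tick, viewport)."""
    return np.full((4, 4), tick * N_VIEWPORTS + viewport, dtype=np.int32)

def harvest(n_ticks: int) -> list:
    """Collect N_VIEWPORTS frames per simulation tick."""
    samples = []
    for tick in range(n_ticks):
        for vp in range(N_VIEWPORTS):
            samples.append(render_viewport(tick, vp))
    return samples
```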
Static / In-Place Movement / Motion
UE_Neural_GameDream_v_02_InPlace.mp4
UE_Neural_Game_Dream_v_02_01.mp4

V2 Inverse Dynamics Predictor
Refined inference handling our complex environment video data.
UE_Neural_Game_Dream_p_02_01.mp4



🧬 The "Data Factory": Ethical Data Lineage

To ensure 100% data ownership and avoid excessive web-scraping, we authored a custom "Data Laboratory" in Unreal Engine 5.

  • Process: Automated agents navigate a custom-built 3D world to capture perfectly synchronized Frame + Input pairs.
  • Objective: To train the Predictor on this ground-truth data, enabling future "zero-shot" auto-labeling of external video sources.
UE_AI_Gen_Assets_World.mp4

Timelapse: Automated data harvesting in the synthetic UE5 environment.
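The recording loop behind such a data factory can be sketched in a few lines: the agent chooses an input each tick, the environment renders the resulting frame, and the two are stored side by side so every frame carries exact ground-truth controls. This is a hedged illustration, not the actual UE5 tooling; `env_step` and `policy` are hypothetical stand-ins.

```python
import numpy as np

# Minimal sketch of a synchronized (frame, input) recorder: because the input
# is chosen *before* the frame is rendered, the pairing is exact by
# construction -- no post-hoc alignment of video and controls is needed.
def record_episode(env_step, policy, n_ticks: int):
    """Return frames and the inputs that produced them, aligned one-to-one."""
    frames, inputs = [], []
    for tick in range(n_ticks):
        action = policy(tick)      # automated agent picks a control input
        frame = env_step(action)   # environment renders the resulting frame
        frames.append(frame)
        inputs.append(action)
    return np.stack(frames), np.array(inputs)
```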

02. Rapid Concept Prototyping with LLMs

Testing whether modern LLMs can accelerate game development workflows. Each prototype below was built from scratch in around 10 minutes using various available LLMs to port game logic to standalone HTML/JS implementations.

Goal: Validate that LLMs enable developers to test wildly different game concepts without investing days per prototype.

🔫 Wolf3D-Style Shooter

Moody first-person raycaster with shooting mechanics.

▶️ Play Live Demo

CatacombGenesis.mp4
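The heart of a Wolf3D-style renderer is casting one ray per screen column through a tile map and using the hit distance to size the wall slice. A fixed-step ray march (a simpler cousin of the DDA used in production raycasters) captures the idea; this sketch is written in Python for clarity and is not the prototype's JS code.

```python
import math

# Tiny tile map: '#' is a wall, '.' is open floor.
MAP = ["#####",
       "#...#",
       "#...#",
       "#####"]

def cast_ray(px: float, py: float, angle: float) -> float:
    """March along the ray in small steps until a wall tile is hit;
    return the travelled distance (which determines wall-slice height)."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist, step = 0.0, 0.01
    while dist < 20.0:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return dist
```

A full renderer would then draw each column with height proportional to 1/dist, which is what produces the perspective effect.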

🎮 Dungeon Crawler Style RPG

First-person raycaster with multiple characters and RPG systems.

▶️ Play Live Demo

AbyssalGrid.mp4

🏎️ Sprite Scaler Racer

OutRun-inspired racing with vintage-style sprite scaling.

▶️ Play Live Demo

NeonDrift.mp4
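Sprite-scaler rendering of the OutRun kind involves no 3D geometry: each object is a 2D sprite whose screen position and size are scaled by perspective division (1/z). A hypothetical helper, not the demo's code, with an assumed screen width and focal length:

```python
# Perspective-scale a sprite at road offset `world_x` and depth `z`:
# closer sprites (smaller z) get a larger scale factor, which is the
# entire "3D" illusion in a sprite-scaler racer.
def project_sprite(world_x: float, z: float, base_size: float,
                   screen_w: int = 320, focal: float = 100.0):
    """Return (screen_x, on_screen_size) for a sprite at depth z."""
    scale = focal / z
    screen_x = screen_w / 2 + world_x * scale
    return screen_x, base_size * scale
```

Drawing sprites back-to-front with these values is enough to produce the classic pseudo-3D road scene.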

Development time: ~10 minutes each | Stack: Pure HTML5/JS, no frameworks | Workflow: Iterative prompting with Claude, Gemini, and ChatGPT


03. Open Source Contribution

ComfyUI Custom Nodes

  • Project: ComfyUI-itsB34ST-Nodes
  • Focus: Custom node systems for advanced generative workflows, including stylization and animation control.
ComfyUI Workflow

Neural Stylization Research

  • Project: Modern-Neuro-Stylize
  • Focus: Modernized neural style transfer experiments with custom architecture modifications.
Neural Stylization Example

Technical Toolbox

  • Languages: Python, JavaScript/TypeScript, C++, HLSL
  • Machine Learning: PyTorch, Custom Architecture Design, Synthetic Data Pipelines
  • Engines: Unreal Engine 5, Custom Neural Renderers
  • Community: Senior Moderator at Banodoco (ML Community)

© 2026 Konstantin Rusinevich. All rights reserved. Media assets are strictly for demonstration purposes and remain the property of their respective owners.
