A gallery of my DeepSeek generated fragment shaders.

Yes. That’s a great idea.

YES. That was on my radar. I have the code in place to let it debug/revise what it generated, and it can also generate small variants; I just haven’t built any UI around it.

If you have a machine capable of running ollama + the deepseek-r1:32b model, let me know and I’ll make the repo public so you can try it. It also works with the 14b model, but it’s way sloppier.
I’m running an RTX 4090 with 24 GB of VRAM here.

Now if you click a shader, it shows the code, and if you click the disk icon, it gives you a single .html file that you can double-click to view the shader.

Thanks, but I guess my RTX 2070 Super is not enough to run this project smoothly.
In any case I will be glad to contribute as a tester; just let me know when the releases are more or less complete.

Yeah thank you for checking it out! :smiley:

@manthrax

When this shader (data_686.json) is in view, my framerate drops from 60+ fps to 2.

The shader data_686.json is in another position but causes the same slowdown. It’s described as a grid with 4 nested loops, but it renders as solid white.

Nice demo project. Do you have an agent generating these automatically? If not, you could use function calling to test the GLSL output: if it compiles, save the shader (paginated to limit what’s in view); if not, scrap it and run again. The outputs that work really well can be categorised to distill a model specific to generating functional GLSL.

Edit:

You’d use something like this in a function-calling agent…

Python (I guess you could do this in JS, but it depends on the agent infrastructure):

import pygame
from OpenGL.GL import *

def test_glsl(shader_code, shader_type):
    """
    Tests if GLSL shader code compiles successfully.

    Args:
        shader_code (str): The GLSL shader code as a string.
        shader_type (GLenum): The type of shader (GL_VERTEX_SHADER or GL_FRAGMENT_SHADER).

    Returns:
        bool: True if the shader compiles successfully, False otherwise.
    """
    # Initialize a Pygame OpenGL context
    pygame.init()
    pygame.display.set_mode((100, 100), pygame.OPENGL | pygame.DOUBLEBUF)

    # Create and compile the shader
    shader = glCreateShader(shader_type)
    glShaderSource(shader, shader_code)
    glCompileShader(shader)

    # Check for compilation errors
    compile_status = glGetShaderiv(shader, GL_COMPILE_STATUS)
    if not compile_status:
        error_log = glGetShaderInfoLog(shader).decode()
        print("Shader Compilation Error:\n", error_log)
        glDeleteShader(shader)
        return False

    glDeleteShader(shader)  # free the shader object once we have the result
    return True

# Example usage:
vertex_shader_code = """
#version 330 core
layout(location = 0) in vec3 aPos;
void main() {
    gl_Position = vec4(aPos, 1.0);
}
"""

fragment_shader_code = """
#version 330 core
out vec4 FragColor;
void main() {
    FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
"""

print("Vertex Shader Compiles:", test_glsl(vertex_shader_code, GL_VERTEX_SHADER))
print("Fragment Shader Compiles:", test_glsl(fragment_shader_code, GL_FRAGMENT_SHADER))
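
The save/retry loop described above could then wrap that helper. This is just a minimal sketch under my own assumptions: `generate_shader` stands in for whatever model call the agent makes, `compiles` is any predicate (e.g. `test_glsl` with a fixed shader type), and `shaders.json` is a made-up database filename.

```python
import json

def generate_and_store(generate_shader, compiles, db_path="shaders.json", attempts=5):
    """Ask the generator for a shader and keep it only if it compiles.

    generate_shader: callable returning GLSL source (the model call).
    compiles: callable taking source and returning True/False.
    """
    for _ in range(attempts):
        source = generate_shader()
        if compiles(source):
            # Load the existing "database", or start a fresh one
            try:
                with open(db_path) as f:
                    db = json.load(f)
            except FileNotFoundError:
                db = []
            db.append(source)
            with open(db_path, "w") as f:
                json.dump(db, f)
            return source
    return None  # every attempt failed to compile; caller can re-prompt
```

The pluggable `compiles` callable keeps the OpenGL dependency out of the loop itself, so the same loop works whether the check is a real compile or a dry-run linter.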

Edit 2:

I think at this stage you could even try to estimate preview VRAM usage per page (think of how the Shadertoy beta took forever to load due to the many realtime previews of complex shaders), since this could be managed as a “category” of shader:

{ lightweight: [vram <= 256 MB], medium: [vram < 1024 MB], heavy: [vram > 4096 MB] }

import pygame
from OpenGL.GL import *
from OpenGL.error import GLError

def get_vram_usage():
    """Gets available GPU memory in KB (vendor extensions; NVIDIA & AMD only)."""
    mem_info = GLint()

    # Try NVIDIA first; PyOpenGL raises GLError if the extension is unsupported
    try:
        glGetIntegerv(0x9049, mem_info)  # GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
        if mem_info.value > 0:
            return mem_info.value  # Available memory in KB
    except GLError:
        pass

    # Try AMD next
    try:
        glGetIntegerv(0x87FC, mem_info)  # GL_TEXTURE_FREE_MEMORY_ATI
        if mem_info.value > 0:
            return mem_info.value  # Available memory in KB
    except GLError:
        pass

    return -1  # VRAM query not supported

def compile_shader(shader_code, shader_type):
    """Compiles a shader and checks for errors."""
    shader = glCreateShader(shader_type)
    glShaderSource(shader, shader_code)
    glCompileShader(shader)

    if not glGetShaderiv(shader, GL_COMPILE_STATUS):
        print("Shader Compilation Error:\n", glGetShaderInfoLog(shader).decode())
        return None
    return shader

def check_multiple_shaders_vram_usage(shader_list):
    """
    Compiles multiple shaders and estimates total VRAM usage.

    Args:
        shader_list (list): List of (shader_code, shader_type) tuples.

    Returns:
        dict: { 'total_vram_used': KB, 'shader_vram_usage': {shader_index: KB}, 'page_size': KB, 'shaders_per_page': int }
    """
    pygame.init()
    pygame.display.set_mode((100, 100), pygame.OPENGL | pygame.DOUBLEBUF)

    initial_vram = get_vram_usage()
    if initial_vram == -1:
        print("VRAM monitoring not supported on this GPU.")
        return {}

    shader_vram_usage = {}
    compiled_shaders = []
    
    for i, (shader_code, shader_type) in enumerate(shader_list):
        before_vram = get_vram_usage()
        
        shader = compile_shader(shader_code, shader_type)
        if not shader:
            continue
        
        compiled_shaders.append(shader)
        
        after_vram = get_vram_usage()
        shader_usage = before_vram - after_vram if after_vram >= 0 else 0
        shader_vram_usage[i] = shader_usage
    
    final_vram = get_vram_usage()
    total_vram_used = initial_vram - final_vram if final_vram >= 0 else sum(shader_vram_usage.values())

    # Estimate VRAM paging (guard against zero usage, which would divide by zero)
    estimated_page_size = 4096  # Assume 4MB (4096 KB) per page; can be adjusted
    avg_usage = sum(shader_vram_usage.values()) / len(shader_vram_usage) if shader_vram_usage else 0
    shaders_per_page = estimated_page_size // avg_usage if avg_usage > 0 else 0

    return {
        'total_vram_used': total_vram_used,
        'shader_vram_usage': shader_vram_usage,
        'page_size': estimated_page_size,
        'shaders_per_page': int(shaders_per_page)
    }

# Example shader list
shaders = [
    ("""
    #version 330 core
    layout(location = 0) in vec3 aPos;
    void main() {
        gl_Position = vec4(aPos, 1.0);
    }
    """, GL_VERTEX_SHADER),
    
    ("""
    #version 330 core
    out vec4 FragColor;
    void main() {
        FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
    """, GL_FRAGMENT_SHADER)
]

vram_info = check_multiple_shaders_vram_usage(shaders)
print("Total VRAM Used (KB):", vram_info.get('total_vram_used', 'N/A'))
print("Shader VRAM Usage (KB):", vram_info.get('shader_vram_usage', {}))
print("Estimated VRAM Page Size (KB):", vram_info.get('page_size', 'N/A'))
print("Shaders Per Page:", vram_info.get('shaders_per_page', 'N/A'))
Maybe you have seen it already, but I am getting this on load:

That sounds like what I’m doing. I have an agent generating the shaders.

I attempt to run it and check for errors… if there are no errors, it gets added to the “database” (a JSON file).
I think there are two major bottlenecks: one is downloading each shader from a separate file (I’m up to about 1,000 shaders now), and the other is just compilation time.

I think I’m going to start storing everything in one giant JSON file that gets loaded at startup… haven’t gotten around to that yet, though.
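
Sticking with the Python used earlier in the thread, the one-giant-JSON idea could be a small merge script run after each batch of generations. The filenames here (`shaders/` directory, `catalog.json`) are assumptions for illustration, not the project’s actual layout:

```python
import glob
import json
import os

def build_catalog(shader_dir="shaders", out_path="catalog.json"):
    """Merge per-shader JSON files into one catalog loaded once at startup."""
    catalog = {}
    for path in sorted(glob.glob(os.path.join(shader_dir, "*.json"))):
        with open(path) as f:
            # Key by filename, e.g. "data_686.json", so the UI can still
            # reference individual shaders by their old names
            catalog[os.path.basename(path)] = json.load(f)
    with open(out_path, "w") as f:
        json.dump(catalog, f)
    return catalog
```

One HTTP fetch for the catalog then replaces ~1,000 individual requests, which is usually the bigger win than the JSON parse itself.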

Those are mostly expected…
The attempt to load /models only works if the AI is running locally… when it fails, the UI falls back into viewing-only mode.
But things are definitely slowing down as the shader catalog gets big. I’m over 1k shaders now… and things load slowly on my mid/high-range gaming desktop, though once everything is loaded it seems ok.

The slowdown is really noticeable, even after loading; zooming helps. Maybe limiting the number of samples displayed simultaneously would be the solution.

PS: Now I think that making the repository public was not a bad idea after all, at least for collecting bug reports.

Forked - works - thanks!

Nice! It could be the cover of a Beatles album made in the ’80s.
