3D Soccer tactic app + 3D scene AI agent builder + AI decision engine + Computer vision

Hey!!

I made an app to model football interactions and decisions using R3F. It's in closed beta right now, so just ask and I'll give you access :slight_smile:

I also made an AI agent that uses custom tooling to build on the pitch with annotations and player positions + movement + actions. Right now it's LLM-based, operating on the scene stores via zod-validated JSON, but I'm planning to push it deeper into spatial embeddings so it can directly understand pitch setups and actions.
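To make the "stores via zod JSON" idea concrete, here is a minimal sketch of the kind of structured tool payload an LLM agent might emit to place a player on the pitch. The names (`PlayerUpdate`, `parsePlayerUpdate`) and the pitch-coordinate convention are hypothetical, and a hand-rolled validator stands in for zod so the snippet stays dependency-free:

```typescript
// Hypothetical shape of a structured tool call the agent could emit.
type PlayerAction = "pass" | "run" | "press" | "hold";

interface PlayerUpdate {
  playerId: string;
  position: { x: number; y: number };    // pitch coordinates in metres (assumed)
  movement?: { dx: number; dy: number }; // intended displacement
  action?: PlayerAction;
}

// Validate an untyped JSON payload before applying it to the scene store.
// In the real app this role would be played by a zod schema's .parse().
function parsePlayerUpdate(input: unknown): PlayerUpdate {
  const obj = input as Record<string, unknown>;
  if (typeof obj?.playerId !== "string") {
    throw new Error("playerId must be a string");
  }
  const pos = obj.position as Record<string, unknown>;
  if (typeof pos?.x !== "number" || typeof pos?.y !== "number") {
    throw new Error("position must have numeric x and y");
  }
  return {
    playerId: obj.playerId,
    position: { x: pos.x, y: pos.y },
    movement: obj.movement as PlayerUpdate["movement"],
    action: obj.action as PlayerAction | undefined,
  };
}

// Example: a raw payload as it might come back from the LLM.
const update = parsePlayerUpdate(
  JSON.parse('{"playerId":"LW-11","position":{"x":30,"y":12},"action":"run"}')
);
```

The point of validating at this boundary is that the LLM output is untrusted text: anything that survives `parsePlayerUpdate` is safe to feed into the 3D scene store.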

I'm also trying to build an AI decision engine that makes every player alive and independent, in order to offer a real football tactics challenge.

I'm also studying computer vision for it. Right now I can reconstruct an action from a live broadcast image using a CV model and jersey colour clustering, exposed as a tool in the agent chat. I'm also considering adding video analysis to reconstruct continuous positional coordinates and events, and therefore the game itself.
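Jersey colour clustering typically means grouping detected players into two teams by their dominant jersey colour. Here is a minimal sketch using k-means with k = 2 over per-player average RGB values; the sample colours are illustrative, not output from the real pipeline:

```typescript
// Sketch: split detected players into two teams by jersey colour
// using a tiny k-means (k = 2) over average RGB samples.
type RGB = [number, number, number];

const dist2 = (a: RGB, b: RGB): number =>
  (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;

function kmeans2(colors: RGB[], iterations = 10): number[] {
  // Seed centroids with the first sample and the sample farthest from it.
  let c0 = colors[0];
  let c1 = colors.reduce(
    (far, c) => (dist2(c, c0) > dist2(far, c0) ? c : far),
    colors[0]
  );
  let labels: number[] = [];
  for (let it = 0; it < iterations; it++) {
    // Assignment step: label each colour by its nearest centroid.
    labels = colors.map((c) => (dist2(c, c0) <= dist2(c, c1) ? 0 : 1));
    // Update step: move each centroid to the mean of its members.
    const mean = (label: number, fallback: RGB): RGB => {
      const members = colors.filter((_, i) => labels[i] === label);
      if (members.length === 0) return fallback;
      return [0, 1, 2].map(
        (ch) => members.reduce((s, m) => s + m[ch], 0) / members.length
      ) as RGB;
    };
    c0 = mean(0, c0);
    c1 = mean(1, c1);
  }
  return labels;
}

// Illustrative red-ish vs blue-ish jersey samples.
const jerseys: RGB[] = [
  [200, 30, 40], [190, 25, 50], [210, 40, 35], // team A
  [20, 40, 200], [30, 35, 190], [25, 50, 210], // team B
];
const teams = kmeans2(jerseys);
```

A real pipeline would sample pixels from the torso region of each player detection and usually cluster in a perceptual colour space such as HSV or Lab rather than raw RGB, but the grouping logic is the same.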

I've just started talking about it with zero online presence, so it's going to take a long time, ahah. I don't care, I'm just having fun.

project closed beta launch post https://x.com/opentacticAI/status/2034227657856962593?s=20

first professional dev account ever ^^ https://x.com/jtopentactic

The post throws around a lot of buzzwords but doesn't actually show much of the system working. It mentions AI agents, a decision engine, computer vision, spatial embeddings, and LLM tooling, but there is almost no concrete explanation of what is implemented versus what is just planned. Right now it reads more like a list of ideas than a working project.

It's also very hard to understand the actual technical architecture: how the agents interact with the scene, how decisions are computed, what kind of simulation model is used for players, or how the reconstruction from broadcast images actually works. Without real examples or technical details it's impossible to evaluate what the system is actually doing.

The description also jumps between a lot of ambitious directions at once: a tactical analysis platform, autonomous player simulation, computer vision reconstruction from broadcast footage, and AI agents building plays on the pitch. Each of those problems alone is a large research problem, so claiming all of them together without showing concrete results makes the scope feel unrealistic.

If the goal is to get serious feedback, it would help a lot to show actual outputs: for example, a clip of the positions reconstructed by computer vision, how the AI agents generate tactics, or how the decision engine controls players. Right now it mostly reads like a concept pitch rather than a technical showcase.

Hey there,

Do you need beta access to verify that everything is implemented and working?

I looked at your messages on other posts and I suspect you're a bot. Are you?

I will post every feature next week on YouTube and Twitter. Do you need the account links to follow me?

Kind regards