I wanted to do a side-by-side comparison of my procedural animation to hand-authored animations on some models I bought. This is so I can better study what I need to improve.
When I got my comparison tool running for the first time, this is what I was greeted with! A bug with the step height, it seems. 😅 #ProcGen
@JulianOliver Discounting AI-hyped executives, most of us in the game industry find it awkward that the term we've used forever to refer to these innocent (but still often cool) processes, which are normally carefully crafted to produce consistent results, is now getting conflated with large models that are produced unethically and prone to bullshitting.
@JulianOliver I get the sentiment here. Due to hype, the term is used so widely that it's lost its meaning. But it's still ultimately better that most of these are based on simple, cheap processes rather than power-hungry large models trained on stolen data.
@JulianOliver In the context of the game industry, the term AI (or game AI) has been used for decades to mean simple processes like pathfinding and state machines that control the behavior of NPCs and enemies and give them an impression of autonomy. Anthropomorphization is good in this context, as long as it's understood that it's based on suspension of disbelief and not actual intelligence.
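For anyone wondering what that kind of "AI" actually looks like in practice, here's a toy sketch (not from any particular game; the Guard class and its thresholds are made up for illustration) of the classic state-machine approach - a handful of explicit, hand-written rules, no trained model anywhere:

```csharp
using System;

// A minimal sketch of "game AI" in the classic sense: a hand-written finite
// state machine driving an enemy. The behavior is fully authored and
// deterministic, but still gives the impression of a creature with intentions.
enum GuardState { Patrol, Chase, Attack }

class Guard
{
    GuardState state = GuardState.Patrol;

    public void Update(float distanceToPlayer)
    {
        switch (state)
        {
            case GuardState.Patrol:
                if (distanceToPlayer < 10f) state = GuardState.Chase;
                break;
            case GuardState.Chase:
                if (distanceToPlayer < 2f) state = GuardState.Attack;
                else if (distanceToPlayer > 15f) state = GuardState.Patrol;
                break;
            case GuardState.Attack:
                if (distanceToPlayer >= 2f) state = GuardState.Chase;
                break;
        }
        Console.WriteLine($"Guard is now in state {state}");
    }
}
```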
@sinbad We've gone a more low-tech route (would only fit some of your use cases, but fits ours nicely) of just buying light bulbs with multiple brightness/kelvin values (2 or 3) that can be toggled/cycled by quickly turning the light off and on again. No apps or special buttons needed for these.
It's a framework (C#) that can be used to implement layer-based procedural generation that's infinite, deterministic and contextual.
Nobody else has tried/tested it yet - if you're up for taking it for a spin, let me know how it looks: what's clear or confusing, whether there are low-hanging-fruit improvements I could make, etc. #ProcGen
The value of layer-based generation is not just the implementation, but also a certain way to *think* about how to define spatial dependencies for large-scale generation.
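To make that way of thinking a bit more concrete, here's a small self-contained sketch of the general idea - emphatically *not* the actual LayerProcGen API (all class names below are made up for illustration; see the docs for the real thing). Each layer generates chunks deterministically from the world seed and the chunk coordinates, and a higher layer can query a lower layer with some padding around its own chunk, which is what makes the result contextual without any global pass over an infinite world:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical names for illustration only; not the LayerProcGen API.
// Each layer generates data in chunks keyed by integer chunk coordinates.
// Generation is deterministic: a chunk's content depends only on the seed
// and its coordinates, plus data queried from lower layers.

struct ChunkCoord : IEquatable<ChunkCoord>
{
    public readonly int X, Y;
    public ChunkCoord(int x, int y) { X = x; Y = y; }
    public bool Equals(ChunkCoord other) => X == other.X && Y == other.Y;
    public override bool Equals(object obj) => obj is ChunkCoord o && Equals(o);
    public override int GetHashCode() => X * 73856093 ^ Y * 19349663;
}

// A lower "points layer": scatters deterministic points within each chunk.
class PointsLayer
{
    const int ChunkSize = 64;
    readonly int seed;
    readonly Dictionary<ChunkCoord, List<(float x, float y)>> cache = new();

    public PointsLayer(int seed) { this.seed = seed; }

    // Deterministic per-chunk RNG: same seed + coords => same output, always.
    public List<(float x, float y)> GetChunk(ChunkCoord c)
    {
        if (cache.TryGetValue(c, out var points)) return points;
        var rng = new Random(HashCombine(seed, c.X, c.Y));
        points = new List<(float x, float y)>();
        for (int i = 0; i < 5; i++)
            points.Add((c.X * ChunkSize + rng.Next(ChunkSize),
                        c.Y * ChunkSize + rng.Next(ChunkSize)));
        cache[c] = points;
        return points;
    }

    static int HashCombine(int a, int b, int c)
    {
        unchecked { int h = a; h = h * 31 + b; h = h * 31 + c; return h; }
    }
}

// A higher layer that is *contextual*: each of its chunks depends on the
// points layer in a padded neighbourhood, so e.g. paths can connect points
// across chunk borders without any global knowledge of the world.
class ConnectionsLayer
{
    readonly PointsLayer points;
    public ConnectionsLayer(PointsLayer points) { this.points = points; }

    public void GenerateChunk(ChunkCoord c)
    {
        // Query the lower layer with one chunk of padding in each direction.
        var nearby = new List<(float x, float y)>();
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                nearby.AddRange(points.GetChunk(new ChunkCoord(c.X + dx, c.Y + dy)));
        Console.WriteLine($"Chunk ({c.X},{c.Y}) sees {nearby.Count} points to connect.");
    }
}

class Demo
{
    static void Main()
    {
        var connections = new ConnectionsLayer(new PointsLayer(seed: 12345));
        connections.GenerateChunk(new ChunkCoord(0, 0));     // generated on demand
        connections.GenerateChunk(new ChunkCoord(1000, -7)); // anywhere, any order
    }
}
```

The point of the sketch: chunk (1000,-7) can be generated before, after, or entirely without chunk (0,0) and the result is identical either way, while the padded query gives each higher-layer chunk enough context from the layer below to make decisions that span chunk borders.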
I've put a lot of effort into the documentation and its illustrations (examples here), which explain the high-level concepts of the framework as well as the details. https://runevision.github.io/LayerProcGen/
@glassbottommeg @MereLeFey @dtgeek Ah sorry, I see Tusky is Android. There's no data for Android apps at the moment, but if someone were to gather the data, I'd be happy to add it to the table.