@hazlin No problem. I'm always willing to answer gamedev questions to whatever capacity I can. The more devs the better, and it can sometimes be hard to find good, modern info on the subject.
Typically a bad idea, unless everyone in that group has proven by their historical behavior that they are incredibly dedicated and hard-working, and also everyone in the group has enough available time to work on the game.
dev-plan
You have it all upside down. Get something super basic running and then playtest it aggressively as you build, bit by bit. Like, start with moving a square around on the screen, then making it jump, then making it collide with other squares, then... and so on. Work on the feel of the game from day one, and download some free decentish art too. A game is no fun if it's no fun and you can't determine that without playing it. A lot. And changing it when it's not fun. No need for corpo powerpoints.
If you think this method sounds strange, remember that it's what Valve did back when they made games.
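To make that concrete, here's roughly what "moving a square around on the screen" looks like in C with Allegro 5 (the lib I recommend further down anyway). A minimal sketch; all the numbers are arbitrary, tune to taste:

#include <allegro5/allegro.h>
#include <allegro5/allegro_primitives.h>

// build: cc square.c -lallegro -lallegro_primitives
int main(void) {
    al_init();
    al_init_primitives_addon();
    al_install_keyboard();

    ALLEGRO_DISPLAY* disp = al_create_display(640, 480);
    ALLEGRO_TIMER* timer = al_create_timer(1.0 / 60.0); // 60 ticks/sec
    ALLEGRO_EVENT_QUEUE* queue = al_create_event_queue();
    al_register_event_source(queue, al_get_display_event_source(disp));
    al_register_event_source(queue, al_get_timer_event_source(timer));
    al_start_timer(timer);

    float x = 320, y = 240;
    for(;;) {
        ALLEGRO_EVENT ev;
        al_wait_for_event(queue, &ev);
        if(ev.type == ALLEGRO_EVENT_DISPLAY_CLOSE) break;
        if(ev.type != ALLEGRO_EVENT_TIMER) continue;

        // poll the arrow keys and move the square
        ALLEGRO_KEYBOARD_STATE ks;
        al_get_keyboard_state(&ks);
        if(al_key_down(&ks, ALLEGRO_KEY_LEFT))  x -= 4;
        if(al_key_down(&ks, ALLEGRO_KEY_RIGHT)) x += 4;
        if(al_key_down(&ks, ALLEGRO_KEY_UP))    y -= 4;
        if(al_key_down(&ks, ALLEGRO_KEY_DOWN))  y += 4;

        al_clear_to_color(al_map_rgb(0, 0, 0));
        al_draw_filled_rectangle(x - 16, y - 16, x + 16, y + 16,
                                 al_map_rgb(220, 60, 60));
        al_flip_display();
    }

    al_destroy_display(disp);
    return 0;
}

Jumping, gravity, and collision are each another handful of lines from there, which is the whole point: every step is small and immediately playtestable.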
Ah. I see the situation now. Unity makes it really easy to get a model walking around on a plane, or on whatever terrain plugin you use. After that it's a massive net productivity loss for software developers. Sure, it's great for artists who don't really understand any code, but it's an anchor chain around the neck of a developer. You have to do things Unity's way. You can't just change core data structures or engine loops willy-nilly. There's a massive complicated pile of junk you don't need integrated into the parts you do need, regardless of which parts those are. This gets in the way and slows you down and frankly is a demoralizing clusterfuck. This is why all the new indie games coming out are disjointed piles of plugins and store-bought assets. This is why the "release" of Manor Lords wasn't really that much improved from the early access demo. This is why Valheim development essentially stalled. (And they still haven't fixed the horrible LoD popping.)
To be clear, it's a problem of using an "engine", not a problem specific to Unity; you'd have the same issues on Godot, Unreal or Lumberyard.
Lean code lets you iterate fast. If I want to test out a new feature, I just write a quick function and hack it into whatever existing location makes sense. If it works, great. If not, rip it out. When I run into performance problems with it, then I optimize it.
This comes at the horrible cost of having to learn OpenGL and how a GPU works conceptually and writing a bit of boilerplate. That's maybe 10% of a 2D game, especially if you use existing libs to do things like load meshes into memory. Even in 3D it's smaller than the gameplay part, though everything is more complicated in a 3D game.
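For a sense of scale, this is the kind of thing "use existing libs" buys you: loading an image file into a GL texture with stb_image (a real single-header lib; the wrapper function itself is just a sketch):

// stb_image: https://github.com/nothings/stb
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <GL/glew.h>

GLuint load_texture(const char* path) {
    int w, h, n;
    unsigned char* pixels = stbi_load(path, &w, &h, &n, 4); // force RGBA
    if(!pixels) return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // no mipmaps generated here, so don't use mipmapped filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    stbi_image_free(pixels); // GL has its own copy now
    return tex;
}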
Months, minimum. Maybe years. The path to being a game dev isn't short or easy, but it is rewarding.
One day I'll understand that, but it is too much for me at the moment.
It'll be too much for you until you take up the challenge and conquer it. The way you grow is by taking on difficult tasks.
I do really like the idea of a minecraft clone.
Me too. While you could use something like Godot for rendering, you're going to have to write the entire voxel system and inventory system from scratch, and that's the hard part. You'll also most likely have to write a bunch of custom shaders, mesh processing, and texturing code in order to efficiently render the voxels due to the great number of textures and the weird baked-in lighting.
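To give a feel for where the from-scratch work starts, here's a bare-bones voxel chunk in C. The sizes and field names are illustrative, not taken from Minecraft or any particular engine:

#include <stdint.h>

#define CHUNK_DIM 16

typedef struct Chunk {
    int32_t cx, cy, cz;   // position of this chunk, in chunk units
    uint8_t blocks[CHUNK_DIM * CHUNK_DIM * CHUNK_DIM]; // block type ids
    uint8_t light[CHUNK_DIM * CHUNK_DIM * CHUNK_DIM];  // baked light levels
    int mesh_dirty;       // set on edit; renderer re-meshes dirty chunks
} Chunk;

// flatten local (x,y,z) into the arrays above
static inline int chunk_index(int x, int y, int z) {
    return x + CHUNK_DIM * (y + CHUNK_DIM * z);
}

static inline uint8_t chunk_get(Chunk* c, int x, int y, int z) {
    return c->blocks[chunk_index(x, y, z)];
}

static inline void chunk_set(Chunk* c, int x, int y, int z, uint8_t id) {
    c->blocks[chunk_index(x, y, z)] = id;
    c->mesh_dirty = 1;
}

All the hard parts hang off of this: turning dirty chunks into meshes, packing the many textures into an atlas or array texture, and recomputing that baked lighting.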
And, I don't like minetest, it is super resource intensive for some reason.
Lua and an ancient graphics engine (Irrlicht).
Java vs C++
Minetest leans heavily on Lua. While LuaJIT isn't the slowest thing out there, it's slower than Java. More importantly, the interface to the C++ side is awkward and inefficient, and the semantics of Lua preclude writing efficient code (i.e., tables instead of structs).
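To illustrate that awkward boundary (these are real Lua C API calls; the vec3 is just an example type): pulling a trivial three-field "struct" out of a Lua table costs a hash lookup and stack traffic per field, where plain C would copy 24 bytes.

#include <lua.h>

typedef struct { double x, y, z; } vec3;

// read {x=..., y=..., z=...} from the table at stack index idx
static vec3 read_vec3(lua_State* L, int idx) {
    vec3 v;
    lua_getfield(L, idx, "x"); v.x = lua_tonumber(L, -1); lua_pop(L, 1);
    lua_getfield(L, idx, "y"); v.y = lua_tonumber(L, -1); lua_pop(L, 1);
    lua_getfield(L, idx, "z"); v.z = lua_tonumber(L, -1); lua_pop(L, 1);
    return v;
}

And that dance happens in both directions, for every entity the scripts touch, every tick.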
And, I still want to make a minecraft clone.
Do it. Use allegro for the boilerplate, per @icedquinn.
It manages a lot of the fiddly annoying stuff for you while giving you the full power of the GPU. It's pretty easy to get a window up with a GL context attached, then load the modern GL functions with GLEW. There are some libs out there to do that stuff, but why? It's not particularly hard.
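Roughly the whole song and dance, assuming allegro 5 plus GLEW (error checks trimmed for space):

#include <allegro5/allegro.h>
#include <GL/glew.h>

int main(void) {
    al_init();

    // ask allegro for an OpenGL-backed window
    al_set_new_display_flags(ALLEGRO_OPENGL | ALLEGRO_PROGRAMMABLE_PIPELINE);
    ALLEGRO_DISPLAY* disp = al_create_display(1280, 720);

    // the context exists now; GLEW just fills in the GL function pointers
    glewExperimental = GL_TRUE;
    if(glewInit() != GLEW_OK) return 1;

    glClearColor(0.1f, 0.1f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    al_flip_display();

    al_rest(2.0); // admire the proof for two seconds
    al_destroy_display(disp);
    return 0;
}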
I also recommend using some good C data structure libs, like stb or my own sti: https://github.com/yzziizzy/sti Or you could write your own if you want the experience. Just make sure to learn and use the techniques from stb or sti in making typesafe generic data structures in C.
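The core trick those libs use, shown as a minimal sketch (this is the general stretchy-buffer technique, not the actual stb or sti code): keep the element type in the pointer itself and let a macro do the bookkeeping, so the compiler type-checks every access.

#include <stdlib.h>

typedef struct { size_t len, alloc; } vec_header;

// the header lives just before the user's pointer
#define vec_hdr(v) (((vec_header*)(v)) - 1)
#define vec_len(v) ((v) ? vec_hdr(v)->len : 0)

#define vec_push(v, val) do { \
    if(!(v) || vec_hdr(v)->len == vec_hdr(v)->alloc) { \
        size_t na = (v) ? vec_hdr(v)->alloc * 2 : 8; \
        vec_header* h = realloc((v) ? (void*)vec_hdr(v) : NULL, \
            sizeof(vec_header) + na * sizeof(*(v))); \
        if(!(v)) h->len = 0; \
        h->alloc = na; \
        (v) = (void*)(h + 1); \
    } \
    (v)[vec_hdr(v)->len++] = (val); \
} while(0)

// usage: a plain typed pointer is the whole container
// float* verts = NULL;
// vec_push(verts, 1.5f);    // fine
// vec_push(verts, "oops");  // compile error, as it should be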
@vic @Dudebro @n3f_X @BowsacNoodle @Hoss So hack it. Or jam it. Wear a mask that looks like rubble and is lined with space blankets. Auto-aiming guns are cool and all, but not some sort of unbeatable menace. If there's a human in the loop then that loop can be broken. If there's not, then the computer can be fooled.
Better than "AI facial recognition" nonsense would be a turret with a thermal camera that just puts one .50bmg into anything appropriately warm that moves and is in the field of vision. Fewer false positives.
@mischievoustomato @ryanhe It would have been trivial. You could have had Mac Pros with Threadrippers. Instead you get Mac Pros that can't even run two screens and have the performance of a mid-range Intel mobile chip, when the benchmarks are honest.
@lain "Fucking hell. This cheap-ass corporation couldn't afford an AI chatbot for their phone menu. They make you talk to a German. My day is ruined." 😆
@icedquinn @shrimp The constant.module.prefixing() seen in other languages is grotesque::and::unnecessary::visual::bloat().
C++ is sane-ish with "using namespace foo;", though few seem to use it. You can use "import * from foo" in some other languages too, but at that point you have the same annoyance as a header file, without the ability to reasonably control it at compile time with pre-declared macros, or the ability to usefully *-import two modules with conflicting names, which they will have, because people assume::you're::prefixing::everything so it's perfectly reasonable to make a top-level.function.called.copy() in every other module.
Another big problem with "modules" as a concept is that they aren't very flexible. They're monolithic. Either you break things apart into different modules, or they share namespaces, or you manually cherry-pick things into every file. You can't include just part of a module or only some of its functions. Headers merely inform the compiler about which functions and structs and global variables will be available from somewhere else: don't worry about it, just compile this code and leave the rest to the linker. You can mix and match this however you want.
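As an example of that mix-and-match (hypothetical names): you don't even need the header to borrow one function from elsewhere in the project. A bare declaration tells the compiler everything it needs, and the linker does the rest.

// in terrain.c -- we want exactly one function from noise.c,
// so declare it ourselves instead of including all of noise.h
extern double noise2d(double x, double y);

double terrain_height(double x, double z) {
    return 40.0 * noise2d(x * 0.01, z * 0.01);
}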
Remember, #include is part of the preprocessor, something which very few modern languages have. Headers also sequentially bring along their macros, which are extremely powerful.
@icedquinn @shrimp It's not a matter of power, it's a matter of control. You #include <string.h> and <math.h> because you are telling the compiler which strcmp and atan2 you are referring to.
Now I hear you say "but who would ever refer to different ones?" Me, in my current project. I'm using my own versions in order to not link against the CRT, among other reasons. And that's the difference between C and most other languages. You can do whatever you want, not just what the language designers thought of.
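A sketch of what that looks like: build with -nostdlib (and -fno-builtin, so the compiler doesn't quietly substitute its own idea of strcmp), declare it in your own header, and every call site resolves to this instead of the CRT's version:

// my_string.c -- our strcmp; nothing here touches the CRT
int strcmp(const char* a, const char* b) {
    const unsigned char* p = (const unsigned char*)a;
    const unsigned char* q = (const unsigned char*)b;
    while(*p && *p == *q) { p++; q++; }
    return *p - *q;
}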
Besides, any significant project should have something like global.h with all the necessary standard includes, system or otherwise, which is included on the command line by the build system.
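Something like this, as a sketch. Both gcc and clang take -include to paste a file in front of every translation unit, so the build system can do "cc -include src/global.h ...":

// global.h -- one place for the includes and macros everything needs
#ifndef GLOBAL_H
#define GLOBAL_H

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <math.h>

// project-wide conveniences live here too
#define countof(a) (sizeof(a) / sizeof(*(a)))

#endif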
The main problem with this is that the author decided s1, s2, and s3 were good variable names. That happens in every language. It could use some more comments, and some of the sections are written in a somewhat old style (really? not using strndup?), but it's a perfectly normal parser algorithm that is perfectly understandable to anyone who understands basic C.
Anyone who struggles with this code probably doesn't actually understand pointers and memory.
@shrimp Header files are a good thing. Aside from the implementation details they take care of, the header file system allows very careful control of scope and namespacing without becoming verbose and annoying. Headers also have the effect of concentrating the layout of your data structures in one compact, easy-to-find place. Both of these things become very important when you start writing large, complicated programs.
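A small illustration with made-up names: the header is the one compact place where the layout and the public surface live; the static helpers in the .c file are invisible to the rest of the program, so nothing needs a namespace.

// entity.h -- the whole public story in one glance
#ifndef ENTITY_H
#define ENTITY_H

typedef struct Entity {
    float x, y;
    float vx, vy;
    int hp;
} Entity;

void entity_update(Entity* e, float dt);

#endif

// entity.c -- helpers stay static, so they can't collide with
// identically-named helpers in other files
#include "entity.h"

static void apply_gravity(Entity* e, float dt) {
    e->vy += 9.8f * dt;
}

void entity_update(Entity* e, float dt) {
    apply_gravity(e, dt);
    e->x += e->vx * dt;
    e->y += e->vy * dt;
}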
@icedquinn @hazlin A Turing test is just some scifi nonsense about replicating human speech patterns well enough to fool a normie (something that is easy; politicians do it constantly and they're not even a form of slime mold). It has no significance technologically.
I don't think it will ever be feasible to replicate human cognition in digital computers as we know them. Maybe in a 5 kilo monolithic chunk of custom silicon. Probably not. Consider distance and signal propagation: it's not possible for anything the size of a warehouse to have the same latency as something the size of a melon, just based on the speed of electricity. And a melon isn't even realistic; only a fraction of that handles cognition, while most of your brain processes sensory input and handles biological housekeeping. Parallelism is built into the hardware of your brain; in computers it's largely handled by sequential processors multiplying arrays upon arrays of numbers. Sure, you can add more cores, but not to the same effect as adding more dendrites. Your brain can effectively multiply enormous matrices in O(1).