@Inginsub They didn't even rewrite it. Someone from the ffmpeg team claimed they just transpiled it, probably with LLMs. And then made a bounty for figuring out why it's slower, because according to the Rust "porters" it should have been much faster.
@dcc@Inginsub I also like that the documentation says that you have to write unsafe code in some specific ways to ensure that it won't leak and that it will even work.
Meanwhile in C land you malloc and then you free when you need to. Or in C++ land, you can use smart pointers and do all the memory work in constructors/destructors and everything will get magically cleaned up. No need for memory management in actual program logic when it's done properly.
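The C side of that, as a rough sketch (the function names are made up for illustration): the malloc and the matching free sit at the edges, ownership is stated once, and the logic in between never touches memory management.

#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: the caller owns the returned buffer and must free() it exactly once. */
char *copy_bytes(const char *src, size_t len)
{
	char *buf = malloc(len);
	if (buf == NULL)
		return NULL;
	memcpy(buf, src, len);
	return buf;
}

void caller(const char *src, size_t len)
{
	char *copy = copy_bytes(src, len);
	if (copy == NULL)
		return;
	/* ... use copy ... */
	free(copy);	/* freed once, when we are done with it */
}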
It's ugly and non-portable. I hate all the stupid __attribute__s.
While we're on the topic of portability, I also hate all of the extra decorative shit added to function declarations that you have to #define away when you are trying to use literally any other compiler or even if you're using gcc but their configure script is broken.
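The usual workaround, for what it's worth, is the common shim below rather than anything from a specific project's headers: stub the GNU decorations out whenever the compiler isn't gcc.

/* Illustrative portability shim: makes __attribute__ vanish on non-GNU compilers. */
#ifndef __GNUC__
#define __attribute__(x)	/* expands to nothing */
#endif

/* A declaration like this then still parses everywhere: */
void die(const char *msg) __attribute__((noreturn));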
@p@dcc@Inginsub@Suiseiseki On the topic of sugar I hate: If you want to pass the array first and the length afterward, you can use a forward declaration in the parameter list—another GNU extension.
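For anyone who hasn't seen it, the extension looks roughly like this (a sketch based on the GCC manual's variable-length array section; gcc-only, it won't parse elsewhere):

/* GNU parameter forward declaration: the "int len;" before the VLA parameter
   lets a[len] refer to len even though len is declared after it. */
int sum(int len; int a[len], int len)
{
	int s = 0;
	for (int i = 0; i < len; i++)
		s += a[i];
	return s;
}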
@Suiseiseki@dcc@Inginsub@p gcc is a shit compiler and still would be shit if Apple didn't invest a lot of money into it. The original had to be forked to even be able to progress and in a similar situation the default binutils linker is also a slow meme. gold is _dead_.
>The second slowest C compiler. It's extremely fast, it just does a lot of checks and optimizations - you can disable the optimizations and it's faster.
>Partially written in C++. Sucks, but not a problem, as a previous version implemented the permitted subset of C++ entirely in C.
>Not written by Ken Thompson. Yes, it was always free software and has never been proprietary software.
>Internally, the largest possible mess. Internally it is quite well structured, the number of features is just pushed to the limit.
>Extensions that solve half a problem that no one has. The extensions provide many things lacking in C.
>Still treats ARM like a second-class architecture. It is a second-class architecture, but regardless it does support ARM32 & Aarch64 quite well.
>Rewrites your goddamn function calls. Based.
>Does all kinds of batshit-insane nonsense that nobody asked for, incurring a compile-time cost for breaking your code. Then don't write the software wrong, or disable those optimization flags.
>The absolute worst possible cross-compilation situation of all time. Not really? Once you've got it to compile, you've got a cross-compiler.
>Breaks ABI between minor versions. Breaking any possible proprietary software between minor versions is a good thing.
>Cannot find crt0.s. I haven't seen that, but it doesn't seem to be an issue with GCC.
@dcc@Inginsub@phnt@Suiseiseki Even :rms: doesn't use HURD. Last I heard, he was suggesting Gentoo or Trisquel. If you're posting from a HURD machine, let me know. In the mean time, this post was written on a full GPL system, built with God's own kencc. :9front:
Did apple develop much in GCC other than the Objective-C frontend (which was originally developed by NeXT)?
>The original had to be forked to even be able to progress There was progress, but the development speed wasn't fast enough for a group's liking.
>the default binutils linker is also a slow meme. I tested the linking speed and linking is quite fast and does not slow the compilation process by more than a handful of seconds - which does not matter.
At least the linker is free software no matter where it is.
I replied to Pete, because he does nothing more than look for a reason to spew retarded, wrong propaganda as an answer. You fucking idiot. Fuck your mother for feeding you.
@Suiseiseki@dcc@Inginsub@p >Apple Nigga, gcc was the default until Mac OS X 10.7 which was released in 2011. They mostly stopped working with them when they switched to GPL3 in gcc4.3 I think (2008). And the reason why gcc became usable around the death of NeXT is also because of Apple. Before that it was unusably slow and also the reason why NeXTSTEP was also slow.
You wanna compare line counts between gcc and tcc?
> It's extremely fast, it just does a lot of checks and optimizations - you can disable the optimization and it's faster.
You've never used a fast compiler. Give it -O0 and then try kencc. gcc -O0 is *slower* than tcc and kencc.
> but not a problem,
It indicates that the project's technical direction has slipped.
> as a previous version that implemented the permitted subset of C++ in all C.
Let me tack this onto the list of things that make gcc the worst compiler that I have to use.
> Yes, it was always free software and was and is never proprietary software.
A legal distinction that does not matter. kencc has always had a relatively permissive license and the version included with Inferno was GPL for years before the Plan 9 version was GPL'd.
> Internally it is quite well structured, the number of features is just pushed to the limit.
$ man gcc | wc
  25590  144392 1237729
Bloat.
$ ssh rex man gcc | wc
  28759  158314 1369922
And it's only getting bigger.
> It is a second-class architecture,
I am vomit.
> Then don't write the software wrong
I never have. gcc has decided to fuck with me in new and exciting ways. The gcc authors can -fomit-frame-pointer my goddamn dick.
> or disable those optimizations flags then.
If I've got to write 2kB of options to make the compiler act like a real compiler for a real OS, then the compiler is a failure. If the compiler *accepts* 2kB of options, the compiler is a failure.
> Not really?
Yes, absolutely terrible. It is actually easier to use qemu-binfmt and a cross-arch chroot than to attempt to cross-compile a system using gcc.
> Breaking any possible proprietary software between minor versions is a good thing.
I have to maintain the distcc cluster's gcc versions in lockstep because the authors cannot keep their shit together.
> I haven't seen that, but it doesn't seem to be an issue with GCC.
Because you never attempt to statically link a program, so the GNU/Clusterfuck is able to keep up.
I attempted running "tig" in ~/src/gcc when I wrote "gcc authors" up near the beginning of this post and it still has not presented me with a screen full of commits so I'm not going back and changing it. gcc is bloated and barely functions and its codebase is like a shantytown on pontoons in the middle of the ocean.
@phnt@dcc@Inginsub@p Yes, apple happened to use gcc as the default compiler and made improvements, but that does not mean apple was the sole reason GCC existed.
I doubt it had much to do with them changing to GPLv3-or-later, as that gives *more* permissions than GPLv2, and the GPLv3 actually *permits* tivoization for commercial-only hardware, unlike the GPLv2, which totally forbids tivoization (copes incoming from those who have not read the GPLv2).
Apple saw that there was work done by some people on a GCC backend and that it was possible to get that backend under a weak license, so they encouraged/funded a frontend under a weak license, resulting in a functionally acceptable compiler that could be rendered proprietary and used as a yoke of unjust power to take the users' freedom when deemed suitable.
Just because something is inconvenient because it is slow doesn't make it unusable.
@p@dcc@Inginsub@Suiseiseki Anybody that tells you gcc's internal structure isn't bad never touched GCC internals. Everything in it, even the build system, is a clusterfuck of duct tape and bad programming practices. Codegen and optimizer are especially bad.
Now watch Suiseiseki spin this into, you aren't ready for the pure genius of GNU developers, that's why you can't comprehend the pure genius of gcc's codebase.
What GNU browser are you using to read and reply to this? Be consistent and uninstall it. A Jehovah's Witness for shit software you don't even use. Suicide, retard.
@p@dcc@Inginsub@Suiseiseki >cross-compiling So far the NetBSD build system is the only project that has succeeded at cross-compiling itself and gcc without much hassle. And if that somehow breaks, you have another chance with LLVM.
> Everything in it, even the build system, is a clusterfuck
I remember the 2.x days when it was basically impossible to get gcc to compile the next version of itself. Apparently it is not as screwed up as it used to be, they ironed some of that out, but it's still a mess, a complete mess.
@p@dcc@Inginsub@Suiseiseki gcc 2.x were by far the shittiest versions. gcc3.x was better, but the build system was still half broken. gcc4.x is the first version series I consider usable, and usable means that it works well enough and I don't want to throw the computer out the window when I have to use it.
It's also the version every BSD is mostly stuck at due to GPL spergery.
@p@dcc@Inginsub@Suiseiseki Plan 9 is very nice for cross-compiling yes. I've built my RPi image on a VM in like 3 minutes and all it takes is setting the arch thing.
@p@dcc@Inginsub@Suiseiseki FreeBSD and OpenBSD switched to clang :D NetBSD still uses the old gcc by default and I don't know about HardenedBSD and Dragonfly.
@dsm@dcc@Inginsub@Suiseiseki@p Imagine auto-updating licenses. It's like automatic system upgrades. You need absolute trust that they won't fuck something up.
> For emacs to run on GRUB OS, you would have to make it support paging memory to disk. For obvious reasons.
mushi% grep tb@becket /sys/games/lib/fortunes
If emacs buffers were limited to the size of memory, it would not be possible to edit /dev/mem. -tb@becket.net
I'm back to my original position: Plan 9 is the good OS, I'll just use that when I want to use something good, and Linux is my Windows, the OS I have to run to have compatibility with normal software.
@bonifartius@dcc@Inginsub@phnt@Suiseiseki acme has no configuration files. If you want it to do something different, the source code is right in /sys/src/cmd/acme and is only 13k lines and it only takes 30 seconds to find the behavior you want to change.
@theorytoe@dcc@Inginsub@Suiseiseki@p For Windows definitely. In terms of compiling speed and quality of optimizations they are mostly the same. The differences mostly disappeared when clang learned how to auto-vectorize.
But LLVM is pure C++ so compiling it takes an eternity.
>Warning: this statement fall through
>Warning: this statement fall through
>Warning: unrecognized pragma
>Warning: this statement fall through
>Warning: unrecognized pragma
>Warning: unrecognized pragma
@waff@dcc@Inginsub@Suiseiseki@p It gave me brainworms that made me realize that like 80% of GNUware actually sucks. So far the only GNU project that didn't give me a headache is libiconv.
> It is what is expected to happen, but in cases where it is not intentional, confusing bugs result.
An alarm that *always* goes off is worse than one that never does.
It's more confusing for the compiler to do something that you can't see on the page when you look at the code. Turning printf("%s\n", s) into puts(s) introduces side effects like two syscalls where there was one.
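For reference, the rewrite being complained about is roughly this; whether it actually fires depends on the gcc version and flags:

#include <stdio.h>

void greet(const char *name)
{
	/* With optimization on, gcc typically replaces this call with puts(name),
	   because the format string is exactly "%s\n". The call target in the
	   generated assembly changes even though the source says printf. */
	printf("%s\n", name);
}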
This is the lesson that software authors--especially compiler writers--fail to learn, over and over and over again: the two things that make software actually useful are responsiveness and simplicity (as in "Do what I said and don't second-guess me"). People freak out if they don't know what the machine is doing and "Trust me, I made it do the right thing even if you tell it to do the wrong thing" just makes them mistrust the software because they can't build an intuitive model of how it works. Outlook's "priority inbox" is never used by its intended audience, the regular office people that sit at a desk and get to "their google" by clicking the G on their Windows machine: they don't know how it's going to sort the mail and they think it's actually easier to do manually. People don't look at confirmation prompts.
But especially if I am using a precise tool--a programming language--I want to not have it run through the goddamn fuzzy filter before it gets to the machine. The compiler's goddamn job is not to decide for me whether or not I am confused: I'm not confused, but I *will* be if the idiot machine decides to stop doing what I said and go do something else. The problem with the goddamn proprietary software is THIS EXACT THING.
> If the fallthrough is correctly documented with a comment,
"Please fill out the proper forms." I just turn the warning off. It used to be considered bad form to slather your code in idiot shit to placate a compiler; it makes the code unreadable. gcc practically requires it. It's not a goddamn C compiler.
> If you have found a segfault, you should report that segfault and how to reproduce it so it can be fixed.
If I have one, it is already fucking up my day. I intend to hack around it and get to the next segfault and repeat the process until the task is done. If it hasn't already cost me hours of time, then maybe I try to locate the problem with the code and then, if the code is even legible instead of being covered in nonsense produced to placate the compiler, maybe I can locate the place where I should report it and then they can tell me to reproduce it in another version. Some of us have to make a living, though.
> I always have all the source code, thus I can freely dynamically link without problems.
And you always have exactly one version on every machine and you never have to bootstrap a dev environment on the only version of some bullshit version of some bullshit distro because that's what is already on the machine.
@p@dcc@Inginsub@Suiseiseki >compilers >second-guessing The other side of the coin is people who don't know what they are doing but think they do (the most dangerous kind). These people keep trying to outsmart compiler optimizations and 99% of them fail to do so.
At least from my point of view, that is why compilers turned aggressive with optimizations: to silence the people who complain about performance while writing bad code and trying to outsmart the compiler. That might also be why compilers turned to rewriting whole functions, creating multiple copies of the same function for different branches, and so on.
The other reason might also be the increasing usage of C++ which literally is optimizer abuse. The first-pass code from codegen that gets generated is horrific. That's also why modern C++ compilers have three stages of optimization before even generating machine code.
> Ahh, I forgot that you had issues when interacting with it from Ruby.
I've used the iconv tool a lot, too.
> Input validation is for babies.
Yeah, I feel like...if I have to reimplement half of the library's functionality just to avoid making the library segfault
---HEY IT JUST FINISHED, SEE BELOW
...then the library is useless and I may as well just do it myself.
> glib also likes to bail out on mundane memory errors instead of trying to deal with them. No, that's hard for feet devs. abort() is much simpler.
God help you if you turn off overcommit but still want to use something that prefers to gobject a gstring instead of just doing `char *` like a normal person.
I was making some sort of point about gcc being bloated at some point earlier in the thread and I decided to do `git pull` and while I was waiting for that I typed `git gc` (after first accidentally typing `go gc`) but that took an hour and then the gc took most of an hour. It took so long that I typed `history` to get an informal benchmark, but it took so damn long that the thread moved elsewhere by the time this process finished running.
@sendpaws@dcc@Inginsub@Suiseiseki@p There is no magic in msvcc. That's the problem with that statement. Optimizations in that compiler simply don't exist.
> These people keep trying to outsmart compiler optimizations and 99% of them fail to do so.
Yeah. The dude that puts "register" in front of every variable declaration.
> At least from my point of view that is why compilers turned aggressive with optimizations, to silence these people
I think it's because they want the code to be faster and once you run out of shit to do, you can say "Well...it's undefined behavior if you do integer overflow so we can just assume that integer overflow never happens!" and they can continue fiddling with shit that wasn't broke until they "fixed" it. It's exciting to make code go 1% faster and it is not exciting to make the codebase cleaner and easier to read.
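The canonical example of that kind of reasoning (a sketch; the exact behavior depends on version and flags, and the function name is made up):

/* Signed overflow is undefined, so gcc is allowed to assume x + 1 > x always
   holds and fold this whole function to "return 1" at -O2. Building with
   -fwrapv brings back the wrapping behavior people naively expect. */
int increment_would_not_overflow(int x)
{
	return x + 1 > x;
}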
> The other reason might also be the increasing usage of C++ which literally is optimizer abuse.
I don't think it should be the same compiler. They can't make the same compiler produce binaries for two different architectures but they can make it accept source for 30 different languages. "Maybe some languages *shouldn't* be added to the gcc monolith" guy gets thrown out the window.
I opened the github mirror and went into the gcc directory and my fucking fans started spinning. Opened two random files in root and they were both 3,000+ lines.
Holy shit.
>109 line function
I'm not going to pretend to be a programming expert and I'm probably the least qualified in this thread -- I dwell in the land of web development and high-level languages -- but... it does reek a bit.
> I opened the github mirror and went into the gcc directory and my fucking fans started spinning. Opened two random files in root and they were both 3,000+ lines.
Many such cases.
> it does reek a bit.
It's massive. It shouldn't be. It's too big for anyone to understand, and when that's the case, it shouldn't be one program.
@phnt@Kirino@Inginsub@Suiseiseki@dcc@waff You can just `find -type f -name '*.c' -print0 | xargs -0 -P30 -n1 indent -kr -i8 -l80 -nsaf -nsaw -nsai -nbbo -ncs -nsc -nfca` and undo the bullshit cutesy half-indented braces and all the weird bullshit ASCII-art that goonoo thinks code should have.
#ifndef __YOU_HAVE_READ_THE_THREAD
#error "Having to dump shit to tell gcc that you plan to fall-through the cases in a switch is nonsense. Go read the thread."
#endif
@p@dcc@Inginsub@phnt@Suiseiseki yeah, i have used acme for a while, i really like the way the mouse is used and the plumber etc. unfortunately the whole acme idea works best on a real plan 9, not on unix. at least for me there's too much friction between the parts.
I follow a mixture of Uncle Bob's Clean Code and (try to) follow NASA's specifications. I find code written this way the easiest for me to follow and maintain.
Namely:
(NASA)
1. All loops must have fixed bounds.
2. Restrict functions to a single printed page.
3. Restrict the scope of data to the smallest possible.
4. Compile with all warnings active; all warnings should be addressed.
(CLEAN CODE)
5. Follow standard conventions.
6. Reduce complexity as much as possible.
7. A class should know only its direct dependencies.
8. If you do something a certain way, do all similar things in the same way.
9. Choose descriptive and unambiguous names.
10. Replace magic numbers with named constants.
11. Functions should prefer fewer arguments.
12. Don't write redundant comments.
(NOT SURE BUT IT'S ALWAYS HELPED ME!)
13. The software should be modular: I should be able to rip out an entire library / module and have the rest still work.
14. Bail out of a function as quickly as possible (avoid nested if/else and prefer guard-style "if (...) return"; see the sketch at the end of this post).
There are others I try to follow but they're less explicit and not things I actively think about.
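A tiny sketch of what rule 14 looks like in practice (the function is hypothetical, just to show the shape):

#include <stddef.h>

/* Guard clauses: bail out on the boring cases first so the interesting path
   isn't buried under nested if/else. */
int first_positive(const int *a, size_t n)
{
	if (a == NULL)
		return -1;
	if (n == 0)
		return -1;

	for (size_t i = 0; i < n; i++)
		if (a[i] > 0)
			return a[i];

	return -1;
}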
@bonifartius@p@dcc@Inginsub@Suiseiseki I like acme as an editor. It has interesting ideas just like many aspects of Plan 9. It's probably the only "IDE" I don't hate using.
BUT the same thing that kills Plan 9 for me without hacking on it a lot (which is something I'm planning on doing) also kills acme: the mouse-centric UI. I already have issues with my wrist, and constantly switching from my mouse to keyboard and back hurts after like an hour. Which is why I mainly use nvim for everything and only use editors that support some kind of vi-mode.
>Now watch Suiseiseki spin this into, you aren't ready for the pure genius of GNU developers, that's why you can't comprehend the pure genius of gcc's codebase. Well, at least I was partially right.