Conversation
Pissed Hippo (sun@shitposter.world)'s status on Saturday, 14-Sep-2024 05:45:25 JST
@meowski @white_male I tried it and spread processing across 8 cores and got a 20 percent performance increase. Not worth it
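A 20 percent gain from 8 cores is consistent with Amdahl's law when only a small fraction of the work actually runs in parallel. A back-of-the-envelope check (the 1.2x-on-8-cores figure is from the post above; the rearranged formula is a standard derivation, not from the thread):

```python
# Amdahl's law: speedup on n cores with parallel fraction p is
#   S = 1 / ((1 - p) + p / n)
# Rearranged to solve for p given an observed speedup S:
def parallel_fraction(speedup, n_cores):
    return (1 - 1 / speedup) * n_cores / (n_cores - 1)

# The thread reports roughly 1.2x on 8 cores:
p = parallel_fraction(1.2, 8)
print(f"implied parallel fraction: {p:.2f}")  # ~0.19
```

In other words, a 1.2x speedup implies only about a fifth of the runtime was actually parallelized; the rest stayed serial (or was eaten by overhead).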
white_male (white_male@poa.st)'s status on Saturday, 14-Sep-2024 05:49:11 JST
@sun @meowski What problem are you trying to solve and in what language?
Pissed Hippo (sun@shitposter.world)'s status on Saturday, 14-Sep-2024 05:49:11 JST
@white_male @meowski I have a library that only uses one core and I want to use all 32 cores because I have thousands of files to process
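For this many-files shape, the usual Python route is multiprocessing, which sidesteps the GIL by using worker processes. A minimal sketch, assuming the single-core library call can be wrapped in a picklable top-level function (`process_file` below is a placeholder stand-in, not the actual library API):

```python
from multiprocessing import Pool

def process_file(path):
    # Placeholder for the single-threaded library routine; it must
    # be a top-level function so worker processes can pickle it.
    return len(path)

if __name__ == "__main__":
    files = ["a.dat", "bb.dat", "ccc.dat"]
    # One worker per core; pool.map fans the files out to workers
    # and collects results in input order.
    with Pool(processes=4) as pool:
        results = pool.map(process_file, files)
    print(results)
```

If the per-file work is short, pickling and process-startup overhead can eat most of the gains, which is one plausible reading of the modest speedup reported earlier in the thread.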
white_male (white_male@poa.st)'s status on Saturday, 14-Sep-2024 06:08:16 JST
@sun @meowski One of my datasets: a 24-thread Ryzen loads and processes it in about 30 seconds. Raw C performance.
white_male (white_male@poa.st)'s status on Saturday, 14-Sep-2024 06:08:17 JST
@sun @meowski Sounds trivial, but Python has another disadvantage: file I/O operations are heavily crippled too. I'm guessing that's your bottleneck. I hit this a lot when I was trying Python for such tasks.
In C you'd do a single-process app with a thread pool. No artificial bottlenecks to consider.
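The "single process, pool of threads" shape the post describes also exists in Python's standard library, with the caveat the thread hints at: CPython's GIL lets threads overlap only I/O waits, not CPU work, so this sketch helps only if file reading really is the bottleneck (`read_size` is an illustrative stand-in, not anyone's actual workload):

```python
from concurrent.futures import ThreadPoolExecutor

def read_size(path):
    # I/O-bound placeholder task: load one file fully and
    # report its size.
    with open(path, "rb") as f:
        return len(f.read())

def total_bytes(paths, workers=8):
    # All threads share one process. Fine for overlapping I/O
    # waits; useless for CPU-bound work under the GIL, which is
    # the "artificial bottleneck" absent in the C version.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(read_size, paths))
```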
Parus Major (parusmajor@shitposter.world)'s status on Saturday, 14-Sep-2024 15:38:55 JST
@sun @white_male @meowski just run the Python script 32 times with GNU parallel. EZ.