ketmar Posted October 5, 2021 7 hours ago, dpJudas said: For the majority of code the best way to avoid bugs is write the code in such a way that it becomes as close to mathematically impossible that it can fail. yeah. the best way to have bug-free code is to write bug-free code. ;-) 3 hours ago, AlexMax said: build-time CI for all supported configurations should be considered as a bare minimum for any project to catch silly mistakes that's why k8vavoom is built with all supported GCC versions. but we were talking explicitly about unsupported platforms. 0 Share this post Link to post
Gibbon Posted October 5, 2021 I have a few unsupported platforms for ReBOOM (mostly 32-bit, and some 64-bit like aarch64 on SBC computers). Even then I test them to make sure they don't segfault but instead fail gracefully on overflows. I simply support too many platforms (compilers are always the latest, with the previous version dropped) to have any automated CI do it. But since mine are a lot less complicated than something like Odamex, k8vavoom or GZDoom, I can get away with it most of the time, even though even the simplest manual scenarios take hours across 5 operating systems. 1 Share this post Link to post
chungy Posted October 5, 2021 2 hours ago, ketmar said: k8vavoom is built with all supported GCC versions Your list of GCC versions that you support building k8vavoom with ends at the newest unsupported upstream release. Something about that is highly ironic. 0 Share this post Link to post
Graf Zahl Posted October 5, 2021 4 minutes ago, chungy said: Something about that is highly ironic. So is "32 bit x86 Linux" as the only officially supported platform, which means that "support" is restricted to obsolete platforms. :P This is precisely the platform all computing is moving *away* from! In all seriousness, I can understand why doing this properly can be cumbersome and annoying, but in the end it is clearly worth it: the 3 "big" compilers vary sufficiently in their emitted diagnostics that the combination of all 3 is far more likely to catch a programming mistake than relying on a single one, let alone on an outdated architecture. Also, these days there are more than enough options to get CI virtually for free with either GitHub's integrated workflows or Travis/AppVeyor, so in the end it's a one-time investment that clearly pays off. 1 Share this post Link to post
ketmar Posted October 5, 2021 2 hours ago, chungy said: Something about that is highly ironic. and somehow me using 32-bit system isn't? ;-) 2 hours ago, Graf Zahl said: that the combination of all 3 is far more likely to catch a programming mistake than relying on a single one tbh, i highly doubt so. but what you definitely will have to do is to please all 3, because each one of them has a different view on what is the "right code style". i already have some nonsense in my code to force GCC to shut up, and adding two more compilers will mean adding more nonsense to silence them all. ;-) and one of those doesn't even have a proper GNU/Linux version. i once tried to make Visual Studio work with Wine, and i don't want to try that again, ever. ;-) 0 Share this post Link to post
Graf Zahl Posted October 5, 2021 1 hour ago, ketmar said: tbh, i highly doubt so. but what you definitely will have to do is to please all 3, because each one of them has a different view on what is the "right code style". Yes, you will, but you also get the benefit of more thorough code checks. 1 hour ago, ketmar said: i once tried to make Visual Studio work with Wine, and i don't want to try that again, ever. ;-) That's where automated CI would come in to save you from the trouble. 1 hour ago, ketmar said: and somehow me using 32-bit system isn't? ;-) Of course it is. 32 bit is on the way out as a development platform; 64 bit is the future. And especially with x86, the performance hit of having fewer registers is very noticeable in most code that is performance sensitive. 0 Share this post Link to post
ketmar Posted October 5, 2021 2 minutes ago, Graf Zahl said: That's where automated CI would come in to save you from the trouble. i prefer to not rely on external services for my development. 3 minutes ago, Graf Zahl said: And especially with x86, the performance hit of having fewer registers is very noticeable in most code that is performance sensitive. oh, i'm hearing that time and time again… and somehow nobody ever mentions the heavy cache pressure from doubled pointer sizes. x86 in 32-bit mode is quite good at register shadowing, and even spilling to the stack is cheaper than the cache misses from increased structure sizes. 0 Share this post Link to post
Graf Zahl Posted October 5, 2021 The impact of doubled pointer sizes is mostly irrelevant. All code in GZDoom I profiled is faster in 64 bit than in 32 bit, with the most significant difference in the software renderer's drawers. The added registers allow significantly better optimization of the code, and that does improve performance. You are simply getting hung up on a non-issue here. For me, in GZDoom it didn't even matter whether I used floats or doubles for all values in the main structs - the performance difference was zero. AActor alone has 64 floating point member values; making those floats would shrink the struct by a fifth. 0 Share this post Link to post
ketmar Posted October 5, 2021 (edited) k8vavoom doesn't have tight loops where this could matter. and most of the time it is executing VM code, where each opcode is quite simple, and all registers are flushed on each new opcode. adding even an infinite number of free registers won't improve that case, but doubling the pointer size immediately affects all hash tables, for example, and there are a lot of them (those are Robin Hood hashes with a continuous bucket area, and each item holds a pointer to the actual data). the same VM argument applies to basically any non-JIT VM implementation out there. also, with 32-bit pointers you can safely use nan-boxing in VM code, but with 64-bit pointers you have to rely on the assumption that no arch/OS out there will use more than 53 bits of a pointer; even fewer, if you want to play it safe and be fast. that is, a 64-bit arch does not magically improve every possible use case, and there are common cases where it is at least no better than 32 bits. most of the code i'm using daily is either limited by I/O or the GPU, or executed in some kind of VM. so i see no reason to spend my time trying to transform my perfectly working system into… a perfectly working system, i guess? "everybody's going 64 bit" is just hype, and there is no need to blindly jump onto that bandwagon. Edited October 5, 2021 by ketmar 0 Share this post Link to post
Gibbon Posted October 5, 2021 Well, I definitely like this conversation; it is a side of source port development that isn't really 'public'. I'm probably the elephant in the room here, as mine contain a mix of floats and doubles (yay legacy). Though, for standards' sake and for cross-platform use, I'm switching them all to floats. I see double the same way as I see long long: it is perfectly fine, but it is a legacy that can be cleaned up by using modern standards. 0 Share this post Link to post
ketmar Posted October 5, 2021 (edited) yeah, in the Doom codebase floats are usually more than enough. it may become problematic with some huge maps (i.e. with a bounding box almost as big as the [-32768…32767] playing area), but i bet Vanilla will have problems with such maps too. k8vavoom is slightly more sensitive to fp precision (because it is a freestep engine), but even for it floats are more than enough. the one exception where k8vavoom code uses doubles is `Sys_Time()`, which works like `GetTickCount()`, only it returns the current "tick" in seconds. but that is almost immediately converted to a time delta, which is stored in a float too. p.s.: tangential note: the second k8vavoom developer is using a 64-bit system and a newer compiler, so k8vavoom is actually "64-bit clean". Edited October 5, 2021 by ketmar 0 Share this post Link to post
Graf Zahl Posted October 5, 2021 The issue of floats vs. doubles is not really value range but precision. A float has less fractional precision than a 16.16 fixed point number, and this can indeed cause occasional problems. The main reason why GZDoom used doubles is something else, though. At the time the conversion was done we still thought that x87 code mattered, and x87 does not have any sane implementation for multi-precision value types, so to ensure consistent behavior the compilers have to write out/read back the numbers constantly. Single precision floats on x87 with VS's precise mode are a heavy performance drag that far eclipsed the effect of the CPU cache. With SSE2 math this entire consideration is mostly irrelevant - but when I shortly afterward changed all doubles in AActor and sector_t to floats, the performance difference was not measurable, so I left them as doubles for the better precision. 1 hour ago, ketmar said: k8vavoom doesn't have tight loops where this could matter. and most of the time it is executing VM code, where each opcode is quite simple, and all registers are flushed on each new opcode. adding even an infinite number of free registers won't improve that case, but doubling the pointer size immediately affects all hash tables, for example, and there are a lot of them (those are Robin Hood hashes with a continuous bucket area, and each item holds a pointer to the actual data). the same VM argument applies to basically any non-JIT VM implementation out there. also, with 32-bit pointers you can safely use nan-boxing in VM code, but with 64-bit pointers you have to rely on the assumption that no arch/OS out there will use more than 53 bits of a pointer. Well, in GZDoom the 32 bit VM interpreter is considerably slower than the 64 bit version, because it has to do a lot more register spilling to the stack.
But it does not use any such pointer tricks at all; the compiler tries very hard to get as many lookups as possible resolved at compilation time (especially class descriptors and sound indices) and outputs byte code that's very CPU-like and low-level. But even so, have you actually profiled this case, or just made an educated guess from your use case? I'm asking because I have never experienced such a thing - in all cases I checked, the lack of registers in 32-bit mode easily made a bigger difference. 1 Share this post Link to post
ketmar Posted October 5, 2021 2 minutes ago, Graf Zahl said: But even so, have you actually profiled this case or just made an educated guess from your use case? I'm asking because I never experienced such a thing - in all cases I checked the lack of registers in 32 bit mode easily made a bigger difference. not on the exact k8vavoom code, but i did that for several stack VM implementations before. they aren't much different from the k8vavoom VM (actually, they were doing even more complex tasks in their opcodes, and they were using the same "objects scattered across the whole RAM" model). having more registers will definitely make a difference for JIT, but i don't have a JIT yet. otherwise, free registers don't help much, due to heavy cache misses (Valgrind helps to see that). the VavoomC codegen simply spits out classical stack VM opcodes; it doesn't try to optimise things (well, beyond very basic constant folding). so basically the only thing that can be shared and kept in a register is the stack pointer, and the VM often has to put it back into a variable anyway to perform method calls. 1 Share this post Link to post
LexiMax Posted October 5, 2021 9 hours ago, ketmar said: that's why k8vavoom is built with all supported GCC versions. but we were talking explicitly about unsupported platforms. And I'm explicitly continuing the testing conversation in the context of supported platforms. Even if that list is very narrow and only pertains to the compiler you happen to use on your personal computer, it's still worth it to use build CI. 0 Share this post Link to post
dpJudas Posted October 5, 2021 3 hours ago, ketmar said: "everybody's going 64 bit" is just a hype, there is no need to blindly jump into that bandwagon. Steam's hardware survey shows that only 0.3% of the users got a 32-bit OS. This 64-bit thing is total hype for sure. 1 Share this post Link to post
Gibbon Posted October 5, 2021 (edited) Last time I checked, the CI doesn't cover the less popular OSes, though. What if you wanted CI on OpenBSD or NetBSD? Doesn't it just have Win/Mac/Ubuntu? I mean, sure, it is something, but it isn't everything that is possible. That's why I collected my own hardware to do it: I have full control over the state of the system at install time. Though I get that most people aren't strange like me and haven't turned their house into the Matrix :) Edited October 5, 2021 by Gibbon 0 Share this post Link to post
Gibbon Posted October 5, 2021 Double post, but this is an update. GZDChex3 for Windows64 bit is now on the list too. 0 Share this post Link to post
chungy Posted October 5, 2021 7 hours ago, ketmar said: and somehow me using 32-bit system isn't? ;-) "I only support unsupported GCC releases" is actually ironic, whereas only supporting 32-bit builds is more often just an inconvenience (it'd cause, at most, the user to install 32-bit support libraries on what's most likely to be a 64-bit system). 0 Share this post Link to post
Graf Zahl Posted October 5, 2021 5 hours ago, ketmar said: that is, a 64-bit arch does not magically improve every possible use case, and there are common cases where it is at least no better than 32 bits. most of the code i'm using daily is either limited by I/O or the GPU, or executed in some kind of VM. so i see no reason to spend my time trying to transform my perfectly working system into… a perfectly working system, i guess? "everybody's going 64 bit" is just hype, and there is no need to blindly jump onto that bandwagon. I missed this part in my last post, but well... The currently running GZDoom survey reports a mere 1% of users on a genuine 32 bit system, down from 1.5% two years ago. So yes, it is a tiny, shrinking minority. The number of 32 bit Linux users is 1 - that's one user, not one percent! Not as tiny as XP when it had to be dropped (which was at 0.15%), but still well below the threshold where I care. This is why my focus is solely on 64 bit; the 32 bit build for 4.7.0 was merely done to give low-end users a one-time chance to use the new GLES backend. 1 hour ago, dpJudas said: Steam's hardware survey shows that only 0.3% of the users got a 32-bit OS. This 64-bit thing is total hype for sure. You know how it is: many users of outdated technology do not realize how far behind the curve they often are! We are seeing the same now with Windows 11: most assumptions about incompatible systems totally overestimate their numbers. With 32 bit, the big question will be when support in forward-moving OSes starts to suffer. I don't expect it to go away any time soon, but I wouldn't discount the possibility of performance optimizations for 64 bit mode in the CPUs at the cost of lower 32 bit performance. Just imagine something like Intel's upcoming CPUs with performance cores and economy cores, but with only the economy cores still supporting 32 bit, so that the performance cores can focus on modern software written for them and be slimmed down a bit.
In a way such a thing would make sense, considering that most 32 bit software is old and can make do with lowered performance. 1 Share this post Link to post
ketmar Posted October 6, 2021 (edited) @dpJudas, @Graf Zahl thank you for supporting my theory of "hype bandwagon" with the factual evidence. that's exactly what i said: "we need to move to 64 bits!" "why?" "because everybody's moving!" this is exactly what hype is. or do you want to tell me that most users know the differences between architectures, can evaluate tradeoffs, and make an educated decision if they want it or not? and no, "forced upgrade" is not the same as "educated decision". 12 hours ago, chungy said: "I only support unsupported GCC releases" is actually ironic too lazy to build each new shitgcc version, so that's what i already have built. (actually, up to gcc9, 8 was a typo.) i stopped upgrading my system gcc at 6 — that's where i got bored playing the idiotic game: "guess if they broke some C++ ABI in this major release or not". Edited October 6, 2021 by ketmar 0 Share this post Link to post
ketmar Posted October 6, 2021 15 hours ago, AlexMax said: Even if that list is very narrow and only pertains to the compiler you happen to use on your personal computer, it's still worth it to use build CI. for what exactly? because "everybody's using it?" been there, seen that with 64 bits. still not convinced. 0 Share this post Link to post
Gibbon Posted October 6, 2021 (edited) 37 minutes ago, ketmar said: @dpJudas, @Graf Zahl thank you for supporting my theory of "hype bandwagon" with the factual evidence. that's exactly what i said: "we need to move to 64 bits!" "why?" "because everybody's moving!" this is exactly what hype is. or do you want to tell me that most users know the differences between architectures, can evaluate tradeoffs, and make an educated decision if they want it or not? and no, "forced upgrade" is not the same as "educated decision". too lazy to build each new shitgcc version, so that's what i already have built. (actually, up to gcc9, 8 was a typo.) i stopped upgrading my system gcc at 6 — that's where i got bored playing the idiotic game: "guess if they broke some C++ ABI in this major release or not". You don't have a package manager? I smell a Slackware user ;) But about hype: well, it has to happen eventually. I mean, the world ran fine on 16bit, so why did we ever need 32bit? Otherwise it'll just lead to stagnation; I find it exciting when new stuff is ready to be used. I was in there at the beginning 13 years ago and dropped 32bit the following year. Edited October 6, 2021 by Gibbon 0 Share this post Link to post
ketmar Posted October 6, 2021 (edited) 53 minutes ago, Gibbon said: You don't have a package manager? I smell a Slackware user ;) 1. wrong. 2. right. ;-) i have a lot of custom-built software (including libs, of course ;-), and i don't see anything so valuable in newer gcc versions to justify the "gcc russian roulette". since gcc switched to "bunny hopping versioning", there is no easy way to know if they broke the C++ ABI or not. 53 minutes ago, Gibbon said: I mean, the world ran fine on 16bit so why did we ever need 32bit? there was a huge difference between the segmented model with 64K pages (ok, it was more complex for 8086, but let's not dive into that hole ;-) and flat 4GB addressing. there's no such difference for 32-vs-64. i have yet to find any software that needs more than 3GB of RAM. i'm not talking about things like video processing and such, only about my own use cases, of course. so far the two reasons that should make me switch are: "but everybody's going 64!" (so what?), and "oh, more registers, so maybe your software — that spends most of its time waiting for I/O — will wait for I/O more efficiently!" (yeah, sure, totally worth it). don't take me wrong, i'm not absolutely against 64 bits per se — such architectures have their uses. but i am against blindly jumping onto the wagon because "everybody rides it, so you have to". Edited October 6, 2021 by ketmar 1 Share this post Link to post
Graf Zahl Posted October 6, 2021 1 hour ago, ketmar said: @dpJudas, @Graf Zahl thank you for supporting my theory of "hype bandwagon" with the factual evidence. that's exactly what i said: "we need to move to 64 bits!" "why?" "because everybody's moving!" this is exactly what hype is. or do you want to tell me that most users know the differences between architectures, can evaluate tradeoffs, and make an educated decision if they want it or not? and no, "forced upgrade" is not the same as "educated decision". Wow, that's a strong load of bullshit. You are running into a very specific performance case with your port that hints at an architectural issue, and you generalize from that that 64 bit is just useless hype? The truth is, you cannot fight innovation and technical evolution. The advantages 32 bit has in some very isolated use cases are very minor compared to being able to use more memory. Yes, everybody is using it. Why should "everybody" stick to outdated technologies that fell out of favor more than 10 years ago? By the same reasoning, should we still use Windows 95? XP was a "forced upgrade", and so were Windows 7, 10 and now 11. 1 hour ago, ketmar said: too lazy to build each new shitgcc version, so that's what i already have built. (actually, up to gcc9, 8 was a typo.) i stopped upgrading my system gcc at 6 — that's where i got bored playing the idiotic game: "guess if they broke some C++ ABI in this major release or not". That's your problem. What you need to realize is that most people do not think like that, so whatever seems right to you makes working with your port a genuine hassle. Let's make this clear: you are doing great work on k8Vavoom, but the constant "my way or the highway" attitude you show here is actually hurting your project and will surely do some long-term damage to it if it does not change.
Most of your users do not use the same hardware or software as you prefer - they use more modern things - so you will inevitably be developing into a dead end if you continue on that path. Which would be a shame. 23 minutes ago, ketmar said: 1. wrong. 2. right. ;-) i have a lot of custom-built software (including libs, of course ;-), and i don't see anything so valuable in newer gcc versions to justify "gcc russian roulette". since gcc switched to "bunny hopping versioning", there is no easy way to know if they broke C++ ABI or not. Well, the inevitable outcome here will be that eventually you won't be able to compile more recent source code anymore, if it chooses to embrace more modern C++ standards. 23 minutes ago, ketmar said: there was a huge difference between segmented model with 64K pages (ok, it was more complex for 8086, but let's not dive into that hole ;-), and flat 4GB addressing. there's no such difference for 32-vs-64. i still have to find any software that needs more than 3GB of RAM. i'm not talking about things like video processing and such, only about my use cases, of course. so far the two reasons that should make me switch are: "but everybody's going 64!" (so what?), and "oh, more registers, so may be your software — that spends most of its time waiting for I/O — will wait for I/O more efficiently!" (yeah, sure, totally worth it). That's all very much irrelevant. All current operating systems are 64 bit, macOS has already dropped all 32 bit support, in Linux it increasingly becomes a hassle as the needed libraries are no longer included, and with Windows 11 the final part of the 32 bit phase-out has also started. And mobile platforms have already ditched it entirely. So even if you still get platform support, it is only a question of time until that gets reduced to token support for running older software through an emulation layer, with some performance impact.
23 minutes ago, ketmar said: don't take me wrong, i'm not absolutely against 64 bits per se — such architectures have their uses. but i am against blindly jumping into the wagon because "everybody rides it, so you have too". Here's where your error in thinking lies: people do not "jump blindly onto a bandwagon". It's just that most software tends to standardize around up-to-date hardware technologies. Current PCs are equipped with 8, 16 or 32 GB of RAM, not with 4. They want to provide the ability for *single applications* to access that RAM if needed. They also need address windows for other stuff, like memory-mapped files, CPU-visible GPU memory and so on. A 32 bit system is poorly equipped to do these things. Especially with memory-mapped file access, the 2/3 GB barrier can very quickly become a crippling factor. Been there, done that, no fun working around the limitations. With 64 bit you can just map the file and be done. Of course the CPU performance issues can also not be discounted. It is not just the increased number of registers, but also being able to do 64 bit arithmetic with far, far less overhead. But in the end it comes down to the OS developers. What should they do? Perform endless double-checking that everything they do works on both 32 and 64 bit? Or initiate an ordered transition to the more modern and more future-proof standard? They clearly prefer a single, universal standard, not two. Also, for installing an OS on a PC to be sold there is no realistic way to offer two options - they are virtually forced to install the modern, more forward-looking one. So ultimately the same happened as in the early 90's: when 32 bit CPUs gained market dominance back then, it quickly pushed all 16 bit stuff out of the market, with 16 bit merely becoming a fallback option in the OS that over time also disappeared.
The same happened with 32 to 64 bit, but quite unsurprisingly it took a while longer because 32 bit was still "good enough" for many tasks. But it couldn't stop the gradual erosion of the market and will eventually lead to its demise, and it doesn't care one bit if you like it or not. 0 Share this post Link to post
dpJudas Posted October 6, 2021 2 hours ago, ketmar said: @dpJudas, @Graf Zahl thank you for supporting my theory of "hype bandwagon" with the factual evidence. that's exactly what i said: "we need to move to 64 bits!" "why?" "because everybody's moving!" this is exactly what hype is. or do you want to tell me that most users know the differences between architectures, can evaluate tradeoffs, and make an educated decision if they want it or not? and no, "forced upgrade" is not the same as "educated decision". You are funny. 64-bit hasn't really been hyped up much at all, but ultimately it doesn't matter. The market chose, 64-bit won, and 32-bit is stone dead at this point. As for advantages, how about being able to actually load gigabytes of textures into your application without having to rely on shitty banking techniques? Or the improved security of address randomization that makes it harder to be hacked? Or never having to worry about memory fragmentation bringing down your application because it hovers around 1 gigabyte of memory usage with very large blocks? I find it funny that you want to stay on 32-bit for a theoretical speed improvement in a small set of programs while finding it perfectly OK to fuck over every other type of application. If 32-bit truly were so superior, the benchmarks would have convinced gamers especially to stay on 32-bit. Yes, gamers are like that: if they got 10 fps better performance on 32-bit, they'd not move. Yet they did. Go figure. 0 Share this post Link to post
dew Posted October 6, 2021 Since this is now a curated, pinned thread for non-standard port builds, I suggest this entire stupid conversation is split from it. It has been entirely hijacked by pointless bickering and egotistic grandstanding over why a port WON'T BE HERE. Graf, judas, gibbon, i understand you guys mean well, but you seriously need to stop indulging ketmar in making yet another thread about himself and his fetishes. 6 Share this post Link to post
Gibbon Posted October 6, 2021 (edited) 26 minutes ago, dew said: Since this is now a curated, pinned thread for non-standard port builds, I suggest this entire stupid conversation is split from it. It has been entirely hijacked by pointless bickering and egotistic grandstanding over why a port WON'T BE HERE. Graf, judas, gibbon, i understand you guys mean well, but you seriously need to stop indulging ketmar in making yet another thread about himself and his fetishes. You're right. I will keep my posts related to the topic. Programmers and conversation are often a volatile mix :) So, to get back on topic: ports that won't be here are those which are provided by the developer, so there is no point in me doing builds for GZDoom since that project provides for the 'big 3' platforms itself. For a smaller project, or a project where the developer doesn't have the hardware but the port works on it, those binaries will be provided. Tonight (CEST): Mac and Linux versions of the latest Woof release. While I could also provide FreeBSD and PowerPC64 versions, I won't, because users on those platforms are highly likely to be able to do it themselves if needed, and the user base is probably in single digits, if it exists at all. Edited October 6, 2021 by Gibbon 0 Share this post Link to post
Redneckerz Posted October 6, 2021 56 minutes ago, Gibbon said: You're right. I will keep my posts related to the topic. Programmers and conversation are often a volatile mix :) Like I said before, maybe you need a fresh new set of targets ;) After all, you only went after this because I made a ridiculous statement full of compile-only ports. I wasn't thinking you would bite, let alone go after them all in the span of a single month! :P It's tricky to say what else there is. I can imagine Windows ports of DOS-only ports, but that can be a hit-or-miss kind of endeavour and obviously a ton more time. I recall you wrote somewhere a tiny list of prerequisites a port has to have before it gets ported over, like an ideal baseline of sorts. Mind sharing that and adding it to your OP? 1 Share this post Link to post
xX_Lol6_Xx Posted October 6, 2021 @Redneckerz could you edit the crispy-doom page of the Doomwiki, more specifically this? It looks like the link provided there is broken, so it'd be nice if it pointed to the GitHub page where they are :) 1 Share this post Link to post
Gibbon Posted October 6, 2021 (edited) And if you feel hesitant to do so, fear not, because these source ports will be maintained for a very long time. I don't abandon things :) Edit: As promised, woof 7.0.0 for Intel and Arm64 M1 Macs has been uploaded. Edited October 6, 2021 by Gibbon 2 Share this post Link to post