OpenGL is much slower than software. !#$@#¨%*(&!@$!!!!!!


Guest


VGA said:

http://zdoom.org/wiki/CVARs:Display#Video_adapter
I never had to use those cvars and commands myself, but since it's relevant ...

Not even close to relevant.

Maes said:

I only owned a Win8 dual-GPU laptop for a short stint, and I recall that at least ZDaemon and GZDoom used the Intel HD, while getting them to use the "good" stuff required some fucking around with a special menu.

No fucking needed. You just needed to add an entry for GZDoom in the driver control panel. GZ now flags both discrete Nvidia and ATI GPUs, so it's no longer necessary.


kb1 said:

But OpenGL works differently.


And with its own set of problems, most definitely, made even worse on systems that don't have dedicated video RAM (as is probably the case with Intel-anything).

Since video RAM (and DMA accesses in general) is uncached, and in shared-memory systems AGP/PCIe doesn't mean anything, you can imagine what happens when you have sprite overdraw: it's the card's hardware pulling the same sprite data again and again and writing the results back to main RAM. BUS CONTENTION AHOY! ;-)
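
To put a rough (and entirely made-up) number on it, a back-of-the-envelope estimate looks something like this; every figure here is a hypothetical example, not a measurement from any real port:

```cpp
#include <cstdio>

// Back-of-the-envelope sprite-overdraw bandwidth estimate.
// All numbers are hypothetical examples, not measurements.
int main() {
    const double width = 1920, height = 1080;  // framebuffer size
    const double bytesPerPixel = 4;            // 32-bit color
    const double overdraw = 5;                 // avg. times each pixel gets touched in a sprite-heavy scene
    const double fps = 60;

    // Assume each overdrawn pixel costs roughly a texture read plus a
    // framebuffer read-modify-write, all hitting the same shared, uncached RAM.
    const double bytesPerFrame = width * height * bytesPerPixel * overdraw * 3;
    const double gbPerSecond = bytesPerFrame * fps / 1e9;

    std::printf("~%.1f GB/s of memory traffic, all of it fighting the CPU for the same bus\n",
                gbPerSecond);
    return 0;
}
```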

Edward850 said:

Not even close to relevant.
No fucking needed. You just needed to add an entry for GZDoom in the driver control panel. GZ now flags both discrete Nvidia and ATI GPUs, so it's no longer necessary.



Well, that was enough fucking for me ;-)

Maes said:

Well, that was enough fucking for me ;-)

Should probably note that the alleged menu fucking I base that on was with NVIDIA and its easy-to-use control panel. I'd hate to think how it worked with ATI, so it's probably for the best that said flags exist now.


Depends on the HD model, and whether you were running OpenGL or not. In software mode, Intel-anything hasn't really got any reason to be crap: it's all plain framebuffering no matter what you're using (and shared RAM might even be at a slight advantage here). But for anything OpenGL/Direct3D (or pretending to be so)...just nope.

I wonder what Intel's performance would be if they were coupled with dedicated VRAM... but AFAIK that might not even be possible: they are all designed to work exclusively off shared RAM. The only one that didn't was the ancient Intel 740/Auburn, but even that one stored textures preferentially in main RAM, hoping that the "advances" of the then-new AGP connector would make video RAM irrelevant. Guess what... it didn't.

Guest

Iris Pro has eDRAM, similar to the Xbox One, but I think the driver controls it; the programmer doesn't have access to it.

Is it feasible to convert the map to a precompiled 3D mesh? Quake and newer engines have occlusion in all directions and rooms over rooms, but Doom maps don't have rooms over rooms (except for hacks and bridges).

Naruto9 said:

Is it feasible to convert the map to a precompiled 3D mesh?


Yes, and most hardware-accelerated 3D ports actually do so upon loading a map. However, when dealing with low-end graphics cards or sprite-heavy maps, all bets are off.
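
The static part of that usually amounts to filling a vertex buffer once at map load, roughly like this (a minimal sketch with invented names; it assumes a GL loader such as GLAD exposes the buffer-object functions, and is not any port's actual code):

```cpp
#include <glad/glad.h>  // assumption: some loader provides the core GL entry points
#include <vector>

struct MapVertex { float x, y, z, u, v; };  // hypothetical vertex layout

// Upload geometry that never changes (e.g. triangulated flats) once at map load.
GLuint UploadStaticMapGeometry(const std::vector<MapVertex>& verts)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(MapVertex),
                 verts.data(), GL_STATIC_DRAW);  // written once, drawn every frame
    return vbo;
}
```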

Maes said:

Yes, and most hardware-accelerated 3D ports actually do so upon loading a map.



Name one which does and performs well.

The currently fastest GL ports, GLBoom and GZDoom, only convert the flats to precompiled data. For walls it's not really feasible, because they can change at any time during the game, and the maintenance overhead easily nullifies any advantage this might bring.
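
To illustrate the wall problem (invented types, not GZDoom or GLBoom code): a wall quad depends on the sector heights at the moment it is drawn, so any cached copy goes stale as soon as a door or lift moves:

```cpp
// Sketch of why wall geometry resists precompilation: the quad's corners are
// derived from sector heights that can change every tic.
struct Sector { float floorz, ceilingz; };
struct Seg    { float x1, y1, x2, y2; const Sector* front; };
struct Quad   { float v[4][3]; };  // x, height, y per corner

Quad BuildWallQuad(const Seg& s)
{
    // Read the *current* heights; a precompiled quad would be wrong as soon
    // as a door opens, a lift moves, or a crusher starts crushing.
    const float fz = s.front->floorz, cz = s.front->ceilingz;
    return Quad{{
        { s.x1, fz, s.y1 }, { s.x2, fz, s.y2 },
        { s.x2, cz, s.y2 }, { s.x1, cz, s.y1 },
    }};
}
```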


Uhhhhhh.... I was under the impression that the various GLNODES lumps (when present) were exactly that, precalculated and complete mesh representations of a Doom map, and that ports with OpenGL/Direct3D support featured built-in node (re)builders just for that. Now, whether existing generators are optimal for the job and/or existing source ports do a good job in using that data, that's another story.

But please don't tell me that there are source ports which try to do a per-frame, on-the-fly mesh generation? O_o


GL nodes only define completely closed subsectors, and the additional vertices required to describe them. How a port translates that to an internal format is entirely up to the port author.
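
Since those subsectors are convex, turning one into triangles can be as simple as a fan around its first vertex; a small sketch with invented types, not any particular port's code:

```cpp
#include <vector>

struct Vert2 { float x, y; };
struct Tri   { Vert2 a, b, c; };

// A closed, convex GL subsector triangulates trivially as a fan.
std::vector<Tri> FanTriangulate(const std::vector<Vert2>& poly)
{
    std::vector<Tri> tris;
    for (size_t i = 1; i + 1 < poly.size(); ++i)
        tris.push_back({ poly[0], poly[i], poly[i + 1] });
    return tris;
}
```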

Maes said:

Uhhhhhh.... I was under the impression that the various GLNODES lumps (when present) were exactly that, precalculated and complete mesh representations of a Doom map, and that ports with OpenGL/Direct3D support featured built-in node (re)builders just for that. Now, whether existing generators are optimal for the job and/or existing source ports do a good job in using that data, that's another story.

But please don't tell me that there are source ports which try to do a per-frame, on-the-fly mesh generation? O_o




There is no such thing as a 'mesh' here. Most GL ports render a map sector by sector, wall by wall. Some put vertex data into a vertex buffer, most don't. While it brings some speed-up, the effect of vertex buffers is relatively minor.

The main problem with a 2.5D engine like Doom is that the ratio of vertices per draw call is extremely low, which means that modern graphics hardware cannot really be pushed to its limits. As things stand, most of the time is spent doing BSP traversal and visibility clipping, not actually rendering the stuff.
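
Roughly, the per-frame work has this shape (a self-contained toy with invented names, not any port's real code): all the recursion and clipping runs on the CPU, and each leaf it reaches only hands the GPU a handful of vertices:

```cpp
#include <cstdio>
#include <vector>

// Toy front-to-back BSP walk. In a real port the per-frame cost of this walk
// plus visibility clipping dwarfs the tiny draw calls it produces.
struct Node { float x, y, dx, dy; int children[2]; };  // child < 0 means leaf number ~child

static std::vector<Node> nodes;
static float viewx, viewy;

static int PointOnSide(const Node& n)
{
    // Which side of the partition line is the camera on? 0 or 1.
    return ((viewx - n.x) * n.dy - (viewy - n.y) * n.dx) <= 0.0f ? 0 : 1;
}

static void RenderBSPNode(int num)
{
    if (num < 0) {  // leaf: a convex subsector
        std::printf("draw subsector %d (a few quads, ~4 vertices each)\n", ~num);
        return;
    }
    const Node& n = nodes[num];
    const int side = PointOnSide(n);
    RenderBSPNode(n.children[side]);      // near side first...
    RenderBSPNode(n.children[side ^ 1]);  // ...then the far side (clipping omitted here)
}

int main()
{
    // Tiny hardcoded tree: one splitter, one subsector on each side.
    nodes = { { 0, 0, 0, 1, { ~0, ~1 } } };
    viewx = 5; viewy = 0;
    RenderBSPNode(0);  // the CPU cost of this walk grows with map complexity
    return 0;
}
```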

  • 3 weeks later...
Guest

Made a room and placed a player start. In such an empty room there is no difference between software and OpenGL. Then I placed a hundred torches and other deco in there; OpenGL is always slower in this case.

Large maps are a slideshow, 2-3 fps in OpenGL even if all monsters are removed.


You can disable the torch and lighting effects, I think. It's those that eat up a lot of GPU horsepower, real quick. If you can live without the pretty colors, go ahead. One cannot expect to run e.g. nuts.wad with full bling and get no slowdown when the sparks start flying (heh, literally).

Guest

There is no way to run OpenGL at all. Unless the map is as simple as E1M1, Entryway, or other vanilla maps, it is not playable.

Any large map is totally unplayable: 5 fps without lighting, 3 fps with lighting.

In the case of mods with Doom 3 monsters and 3D weapons, it drops to 1 fps or less.

Naruto9 said:

There is no way to run OpenGL at all. Unless the map is as simple as E1M1, Entryway, or other vanilla maps, it is not playable.

Any large map is totally unplayable: 5 fps without lighting, 3 fps with lighting.

In the case of mods with Doom 3 monsters and 3D weapons, it drops to 1 fps or less.


Time to buy a video card. I'd say get a GTX 960. Then again, I've got an old 470 and it played Doom fine, but you're going to have to spend some money like everyone else.

On the bright side, most of the stuff that people go to OpenGL for is possibly detrimental to the game. Trilinear filtering just blurs the shit out of low-res textures, and high resolution just makes the game look dated by showing you stuff you could never see at 320x200.

Having said that, I usually play GZDoom with all the filtering off at 1080 since that's my native res. Someday someone will add a feature to simulate 320x200 at 1080 :)

GZDoom is awesome.

Edit: I just wanted to clarify, I'm not being a smartass; I really would like to see the 320x200 simulator, that would be the most awesome thing.


GZDoom can play in 320x200, but modern graphics drivers do not have such a video mode available anymore.

It also never tries to draw to an offscreen frame buffer and then render that to the screen.

It wouldn't make much sense anyway: for hardware that already has pixel fill rate problems with the Doom engine, it would not result in a speedup but more likely in a slowdown.


I was just trying to think of a way to have the game look like 320x200 while actually running at my native 1080, because normal 320 is very blurry and shit-looking with all the scaling and stretching the monitor does.

Maybe every single pixel would be drawn as multiple identical neighboring pixels or something?

Do you see what I'm getting at here? I'm terribly unclear sometimes. I'd just like to be able to view the game as the artists originally intended it, while hanging on to all the cool tricks GZDoom brings to the table (32-bit color, great audio options, perspective-correct mlook, XInput, WAD compatibility, etc.).
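
For what it's worth, the "one pixel drawn as a block of identical pixels" idea is basically: render the scene into a 320x200 offscreen texture and stretch it to the screen with nearest-neighbor filtering. A rough GL sketch of that approach (it assumes a loader such as GLAD and a context with framebuffer objects; as noted a few posts up, this is not how GZDoom actually renders, it's just an illustration of the idea):

```cpp
#include <glad/glad.h>  // assumption: some loader provides the core GL entry points

// Create a 320x200 offscreen render target whose texture uses GL_NEAREST,
// so that upscaling duplicates pixels into crisp blocks instead of smearing them.
GLuint MakeLowResTarget(GLuint* outTexture)
{
    GLuint fbo = 0, tex = 0;
    glGenFramebuffers(1, &fbo);
    glGenTextures(1, &tex);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 320, 200, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    *outTexture = tex;
    return fbo;
}

// Per frame: draw the world into the 320x200 target, then blit it to the
// full-resolution default framebuffer. GL_NEAREST does the pixel duplication.
void PresentLowRes(GLuint fbo)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, 320, 200, 0, 0, 1920, 1080,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```

(Since 1080 isn't an exact multiple of 200, the blocks won't all be exactly the same height; an integer scale like 1600x1000 with small borders would keep them uniform.)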

Gez said:

Chocolate Doom does that.


You're right, thanks for the heads up! That is what I'm talking about.

My problem is that I've used GZDoom predominantly for years and change is hard lol. And I feel like I'd be losing some things to gain others.


I'm pretty sure I have, but I will check it all out again.

IIRC I have a choice between a tiny image in the middle of the screen and nasty-looking scaling.

Maes said:

I wonder what Intel's performance would be if they were coupled with dedicated VRAM... but AFAIK that might not even be possible: they are all designed to work exclusively off shared RAM. The only one that didn't was the ancient Intel 740/Auburn, but even that one stored textures preferentially in main RAM, hoping that the "advances" of the then-new AGP connector would make video RAM irrelevant. Guess what... it didn't.

There is a Broadwell variant that has 128 MB of eDRAM on the CPU, intended to help out. I think it provided a nice graphics boost, but I haven't seen a benchmark. When you weren't using the embedded GPU it would act as a 128 MB L4 cache, which apparently provided an insane speedup - that CPU still beats the latest Skylake in CPU-bound games. People were annoyed with Intel for having no intention of making a top-of-the-line Skylake equivalent.

The main issue hurting the Intel GPUs is that they have a very limited power/transistor budget compared to discrete GPUs. If you scaled the Intel design up into the same kind of Nvidia/AMD-level gigantic power-sucking beast, coupled with gigabytes of super-fast RAM, it would probably be extremely competitive. I kinda wonder how good their drivers are (especially the OpenGL ones), but I have a feeling they're better than the AMD ones. I had to yank an AMD card from my new work PC because their POS drivers wouldn't render desktop applications correctly.


From experience I can tell that Intel's GL drivers have their own share of issues. While working on GZDoom I encountered some really, really odd things with the GLSL compiler which just refused to compile proper shaders.


For better or worse most of their effort probably goes into the Direct3D drivers.

I'm fairly pleased with how well the HD4000 handles things but GZDoom is probably the most modern thing I've thrown at it.

david_a said:

For better or worse most of their effort probably goes into the Direct3D drivers.

I'm fairly pleased with how well the HD4000 handles things but GZDoom is probably the most modern thing I've thrown at it.


Well, with consoles all being powered by AMD, and games being designed around consoles, it would make sense that Direct3D is going to be the target for the foreseeable future, right?

Although doesn't AMD have their own API now? Mantle? Or am I way off base?

