Software Rendering Mode for OpenMW
Posted: 28 Jun 2023, 01:40
Hey, I was wondering whether adding a software rendering mode to OpenMW would be possible. You might be wondering why anyone would request such a feature; it's primarily due to my research into retro PC games and how they were developed, and I believe CPU-based software rendering has several key advantages over the likes of OpenGL, Vulkan, and DirectX.
Nowadays, an overwhelming majority of games and applications use hardware rendering through DirectX or OpenGL; we know this. These APIs let programmers use the graphics card to generate the 3D images on your screen, and the interface itself is supplied by your graphics card's drivers. This method of producing visuals is called hardware acceleration, or hardware rendering. When the graphics card is not involved in creating the 3D picture, the process is called software rendering (of course).
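To make the distinction concrete, here's a minimal sketch (my own illustration, not OpenMW code) of what software rendering boils down to: the CPU writes pixels straight into a block of memory, with no GPU, driver, or API in between.

```c
#include <stdint.h>
#include <stdlib.h>

#define W 320
#define H 240

/* Signed area of the parallelogram spanned by (b-a) and (c-a);
 * its sign tells which side of the edge a->b the point c lies on. */
static int edge(int ax, int ay, int bx, int by, int cx, int cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* Rasterise one solid-colour triangle into a 32-bit framebuffer.
 * This loop over pixels IS the entire "rendering API". */
static void draw_triangle(uint32_t *fb,
                          int x0, int y0, int x1, int y1, int x2, int y2,
                          uint32_t colour) {
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            /* A pixel is inside when it lies on the same side of all
             * three edges (consistent winding assumed). */
            if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                edge(x1, y1, x2, y2, x, y) >= 0 &&
                edge(x2, y2, x0, y0, x, y) >= 0)
                fb[y * W + x] = colour;
        }
    }
}
```

Everything a 90s renderer did (texturing, shading, depth) is layered on top of a loop like this, which runs identically on any CPU that can compile C.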
In the 90s, there was no 3D acceleration at all. On 386, 486, and Pentium computers, games produced their graphics through software rendering alone, so titles like the original Tomb Raider had to rely on the CPU rather than an expensive graphics card.
As CPUs became faster, the quality of software rendering increased. Once we reached the 166 MHz Pentium 1, 640x480 became playable with a few thousand polygons. This was enough for most games of the time; however, to keep the rendering playable, they used no texture filtering and very few effects. Later, the S3 ViRGE and the 3dfx Voodoo 1 were introduced, offering filtered textures and faster 3D performance than software rendering. Of course, to access these capabilities, developers had to rewrite their code to support these chips. At first every manufacturer pushed its own proprietary API, but later the interfaces were more or less standardised, and OpenGL- and DirectX (Direct3D)-compatible drivers became widely available. By 1997, these early graphics chips could deliver as much as double the performance of software rendering. They offered more fps and better image quality, so programmers began switching to hardware rendering as the primary method of producing graphics.
But what is the situation now?
Currently we have three DirectX variants in use on Windows. DirectX 9 is mostly used for legacy titles and offers fixed-function rendering with shader support. DirectX 11 supports only the programmable pipeline, and DirectX 12 is designed for highly parallelised rendering. These three APIs are totally incompatible with each other, so GPU-reliant developers have to write a separate implementation for each one. If a programmer decides to write a game against DirectX 12, they must also write a separate rendering backend for DirectX 9, since older graphics chips don't support the newer DirectX 12 API. Initialising DirectX 12 and rendering a few textured triangles takes roughly 1000 lines of code; a separate DirectX 9 backend for compatibility is another 500 or so. The story doesn't end there though – that's just for Windows compatibility.
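The usual way engines cope with this (a generic pattern, not necessarily how OpenMW is structured) is a backend interface: a table of function pointers with one full implementation per API. The sketch below stubs in a hypothetical software backend; a real engine would need a dx9_renderer, dx12_renderer, gles2_renderer, and so on, each hundreds or thousands of lines long.

```c
/* One entry per operation the engine needs; every graphics API
 * gets its own implementation of this table. */
typedef struct {
    const char *name;
    int  (*init)(void);
    void (*draw_triangles)(int count);
    void (*shutdown)(void);
} Renderer;

/* Stub "software" backend (hypothetical names, for illustration). */
static int  sw_tris_drawn = 0;
static int  sw_init(void)      { return 0; }
static void sw_draw(int count) { sw_tris_drawn += count; }
static void sw_shutdown(void)  { }

static const Renderer software_renderer = {
    "software", sw_init, sw_draw, sw_shutdown
};

/* The engine proper only ever talks to the table, never to the API
 * directly; swapping backends means swapping the table. */
static void render_frame(const Renderer *r) {
    r->draw_triangles(1000);
}
```

The abstraction keeps the game code portable, but it doesn't make the backends free: each table entry still has to be written, debugged, and tested separately for every API you support.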
DirectX, as we know, does not exist on Linux, macOS, iOS, or Android devices. These platforms rely on OpenGL or Vulkan, except macOS, which nowadays pushes its own Metal API. To support these platforms, you have to implement those APIs as well. Porting your graphics engine to OpenGL 1.1 takes about 500 more lines (still speaking of a triangle-based renderer that can do texturing). However, that only covers desktop OpenGL, which does not exist on mobile phones. Mobiles use a different variant called OpenGL ES, and there are separate OpenGL ES generations. The newer one, OpenGL ES 3, is similar to DirectX 11: you have to write a shader-based programmable pipeline to drive it. That's about another 1000 lines, and it's not backwards compatible with older phones, which run OpenGL ES 2 or ES 1.x. To create a program that works on OpenGL ES 1, you must once again write a new, compatible renderer.
OK, so now you have your renderer, with separate code paths for DirectX 11, DirectX 9, OpenGL 1.1, OpenGL ES 3, and OpenGL ES 1. The API-facing rendering code alone has grown past 5000 lines, and in theory it can now run on PCs, tablets, and phones. In theory. In reality, it will only run on your own devices, because implementations of 3D APIs are broken. Code that works well on an nVidia chip may not work so well on an AMD, Samsung, or Mali chip. Some chips have no problem rendering non-power-of-two textures; some only work reliably with power-of-two textures. Some will crash if you allocate more than a few thousand textures; some will just give you a white picture because you forgot to set a bit somewhere. On some configurations it may simply crash the phone, so you have to test against every graphics vendor. You buy a couple of nVidia and AMD cards, VIA-based laptops, everything from MediaTek, HiSilicon, Amlogic, and Samsung, through tens of less common manufacturers, and you spend the rest of the year testing your engine on hundreds of chips, making sure your game runs well on all of them before it's fit for production. That's how bad it can get, depending on how widely supported you want your game to be.
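The non-power-of-two quirk mentioned above is real enough that portable engines commonly pad textures up to the next power of two just in case. A small sketch of that defensive check (my own illustration, not OpenMW code):

```c
#include <stdint.h>

/* True when v is a power of two (and nonzero): exactly one bit set. */
static int is_pow2(uint32_t v) {
    return v != 0 && (v & (v - 1)) == 0;
}

/* Round a texture dimension up to the next power of two -- the safe
 * size on GPUs that mishandle NPOT textures. The shift cascade
 * smears the top set bit into every lower bit, then +1 carries. */
static uint32_t next_pow2(uint32_t v) {
    if (v == 0) return 1;
    v--;                 /* so exact powers of two map to themselves */
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v + 1;
}
```

A software renderer never needs this dance: its "texture unit" is just an array indexed by the CPU, so any size works on every machine.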
We can easily see that these problems would not exist without hardware acceleration. As discussed previously, 3D acceleration was only born because the Pentium 1 was too weak to produce "next generation graphics" at a suitable resolution. That was 25 years ago, however. Since then, our CPUs have become more than 100x faster, so this is no longer an issue. I can play games like Quake 2 and Half-Life in software mode at very high resolutions and high framerates without breaking a sweat.
So the TL;DR is that the OpenMW project would benefit heavily from offering a software rendering mode. Not only would it reduce the amount of code and programming overhead the project needs, it would also let OpenMW be ported to a much wider variety of platforms, including niche devices like embedded hardware and weaker portables.