Yeah, when I was on XFCE on Arch I remember going into some places in the file manager where it wouldn’t let me edit files, etc., without running it from the terminal through sudo.
Is there a technical reason that Linux apps can’t/don’t just pop up an authenticator prompt asking for more privileges, like Windows apps can? Why does nano just say the file is unwritable instead of letting me escalate privileges?
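For what it’s worth, the plumbing for this does exist on Linux: polkit, which is what produces the graphical authentication popups some apps show. A minimal sketch of invoking it yourself, assuming pkexec is installed and a polkit agent is running; the file path is just an example:

```python
# Minimal sketch: route a command through polkit's pkexec so the
# desktop authentication dialog appears (assumes pkexec is installed
# and a polkit agent is running, as on most desktop distros).
import subprocess

def edit_as_root(path):
    # pkexec pops up the graphical polkit prompt, then runs the
    # command as root if the user authenticates.
    return subprocess.run(["pkexec", "nano", path]).returncode

if __name__ == "__main__":
    edit_as_root("/etc/hosts")  # example target, not from the thread
```

Apps that never show the prompt simply haven’t integrated polkit; nano in particular is a plain terminal program with no hook into it.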
They’re already going to only ship it through Steam. As long as you’re using Steam, they don’t care.
You could use Nsight; it has a Linux version and is very in-depth (it shows every draw call, and there’s also a variant that shows very detailed CPU tasks).
Of course, it’s harder to use than PresentMon.
It says on that page that SHaRC requires raytracing-capable hardware. I guess they could be modifying it to use their own software raytracing implementation. In any case, it’s the exact same math for either hardware or software raytracing; hardware is just a bit faster. Unless you do what Lumen did and use a voxel scene for software raytracing.
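To make “the exact same math” concrete: the core of any tracer is a ray/triangle test such as Möller-Trumbore, and it’s identical whether a CPU runs it in software or an RT core runs it in fixed-function hardware. A rough sketch:

```python
# Möller-Trumbore ray/triangle intersection: the same math whether it
# runs on the CPU (software raytracing) or in a GPU's RT units.
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-8):
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(dirn, e2)
    det = dot(e1, h)
    if abs(det) < eps:             # ray parallel to the triangle
        return None
    f = 1.0 / det
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(dirn, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)             # hit distance along the ray
    return t if t > eps else None

# Ray from z=-1 straight at a triangle in the z=0 plane: hits at t=1.
print(ray_triangle((0, 0, -1), (0, 0, 1),
                   (-1, -1, 0), (1, -1, 0), (0, 1, 0)))  # 1.0
```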
Yeah, that’s just rasterized shadow mapping. It’s very common: a lot of old games use it, and so do most modern ones. It’s used in basically any non-raytraced game with dynamic shadows. (I think the only other way to do it is directly projecting the geometry, which was only done by a few very old games and can only cast shadows onto a single flat surface.)
The idea is that you render the depth of the scene from the perspective of the light source. Then, for each pixel on the screen, to check whether it’s in shadow, you find its position on that depth texture. If something else is closer to the light at that spot, the pixel is in shadow; otherwise it isn’t. The result is filtered to make it smoother. The downsides: it can’t support shadows of variable width without extra hacks that don’t work in all cases (and literally every real shadow has variable width); sharp shadows require rendering the depth map at a very high resolution; rendering a whole depth map is expensive and includes pixels that are never seen; and it doesn’t scale down well to low resolutions (like if you wanted 100 very distant shadow-casting lights, each needing its own map).
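A minimal sketch of that depth comparison, with a hypothetical depth_map array standing in for the texture rendered from the light’s point of view:

```python
# The core shadow-map test described above, sketched in Python.
# depth_map is a 2D array of depths as seen from the light;
# (x, y, depth) is the screen pixel projected into that light space.
def in_shadow(depth_map, x, y, depth, bias=0.005):
    nearest = depth_map[y][x]   # closest surface the light can see
    # Something sits between this pixel and the light -> shadowed.
    # The small bias avoids self-shadowing ('shadow acne').
    return depth - bias > nearest

# The light sees a surface at depth 0.40 in this texel, so a point
# at depth 0.60 behind it is in shadow.
print(in_shadow([[0.40]], 0, 0, 0.60))  # True
```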
Raytraced shadows are actually very elegant: they operate on every screen pixel (so quality naturally increases as you get closer to any area of interest in the shadow) and they naturally support varying shadow widths, at the cost of noise and maybe a few more rays. They still scale expensively with many light sources, but some modified stochastic methods look very good and allow far more shadow-casting lights than would ever have been possible with pure raster.
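A toy version of the stochastic idea: jitter shadow rays across an area light and average the visibility. More samples mean less noise; a larger light means a softer penumbra. The sphere occluder here is just a stand-in for a real scene query:

```python
# Stochastic soft shadows, sketched: average visibility over jittered
# samples on an area light. The sphere is a stand-in occluder.
import math, random

def occluded(p, q, center=(0.0, 1.0, 0.0), radius=0.5):
    # Does the segment p -> q pass through the stand-in sphere?
    d = [q[i] - p[i] for i in range(3)]
    f = [p[i] - center[i] for i in range(3)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(d[i] * f[i] for i in range(3))
    c = sum(x * x for x in f) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return 0.0 < t < 1.0

def soft_shadow(point, light_pos, light_radius, samples=64):
    lit = 0
    for _ in range(samples):
        # Jitter a target point on a disk-shaped area light.
        ang = random.uniform(0.0, 2.0 * math.pi)
        r = light_radius * math.sqrt(random.random())
        target = (light_pos[0] + r * math.cos(ang),
                  light_pos[1],
                  light_pos[2] + r * math.sin(ang))
        if not occluded(point, target):
            lit += 1
    return lit / samples   # 0 = fully shadowed, 1 = fully lit

# A point directly under the occluder: mostly shadowed (~0).
print(soft_shadow((0, 0, 0), (0, 2, 0), light_radius=0.3))
```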
You don’t notice the lack of shadow casting lights much in games because the artists had to put in a lot of effort and modifications to make sure you wouldn’t.
I heard the Source 2 editor has (relatively offline, think Blender-viewport-style) ray tracing as an option, even though no games built on it support any sort of real-time RT. It’s just so artists can estimate what the light bake will look like without actually having to wait for it.
So what people are talking about there is lightmaps: essentially a whole extra texture, on top of everything else, that holds diffuse lighting information. It’s ‘baked’ in a lengthy ray-tracing process that can take seconds to hours to days depending on how fast the baking system is and how hard the level is to light. Baking puts that raytraced lighting information directly into a texture so it can be read in fractions of a millisecond, like any other texture. It’s great for performance, but it can’t be quickly previewed, can’t show the influence of moving objects, and technically only applies correctly to fully rough surfaces (so most diffuse objects but basically no metallic ones; those usually use light probes and bent normals, and sometimes sample the lightmap anyway, even though that isn’t technically correct and can produce weird results in some cases).
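A sketch of why baked lighting is nearly free at runtime: shading becomes a plain texture fetch in the surface’s second UV set. The 2x2 ‘baked’ texture below is hypothetical:

```python
# Reading a lightmap is just a texture fetch: all the expensive ray
# tracing happened at bake time and left its result in these texels.
def sample_lightmap(lightmap, u, v):
    h, w = len(lightmap), len(lightmap[0])
    # Nearest-neighbor fetch for brevity; engines filter bilinearly.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return lightmap[y][x]

baked = [[0.9, 0.8],
         [0.2, 0.1]]   # hours of path tracing, reduced to four texels
albedo = 0.5
# Diffuse shading = surface color * baked incoming light.
print(albedo * sample_lightmap(baked, 0.25, 0.25))  # 0.45
```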
The solution to lighting dynamic objects in a scene with lightmaps is a grid of pre-baked light probes. These give lighting to dynamic objects but don’t receive it from them.
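The probe-grid idea sketched: trilinearly blend the eight baked probes surrounding the object’s position. A real engine stores spherical harmonics per probe rather than the single irradiance value used here:

```python
# Light a dynamic object by trilinearly interpolating a baked grid
# of probes; grid[k][j][i] is the probe at integer coords (i, j, k).
def sample_probes(grid, x, y, z):
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    lerp = lambda a, b, t: a + (b - a) * t
    c00 = lerp(grid[k][j][i],         grid[k][j][i + 1],         fx)
    c10 = lerp(grid[k][j + 1][i],     grid[k][j + 1][i + 1],     fx)
    c01 = lerp(grid[k + 1][j][i],     grid[k + 1][j][i + 1],     fx)
    c11 = lerp(grid[k + 1][j + 1][i], grid[k + 1][j + 1][i + 1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)

# A 2x2x2 grid, bright (1.0) on one side and dark (0.0) on the other:
probes = [[[1.0, 0.0], [1.0, 0.0]],
          [[1.0, 0.0], [1.0, 0.0]]]
print(sample_probes(probes, 0.5, 0.5, 0.5))  # 0.5, halfway between
```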
Still, even if every thread looks like it’s sitting at 60%, a load that appears and disappears very quickly and gets averaged out on the graph (as could happen in an unoptimized or unusual situation) could still be a factor. I think the only real way to know is to benchmark. If you really want to know, you could try underclocking your CPU and see if performance gets worse.
Really? Ambient occlusion used to be the first thing I would turn on. Anyways, 4K textures barely add any cost on the GPU, because they don’t use any compute, just VRAM, and VRAM is very cheap ($3.36/GB for GDDR6). The only reason consumer cards are limited in VRAM is to keep them from being used for professional and AI applications: if they had a comparable ratio of VRAM to compute, they would be an insanely better value than workstation cards, and manufacturers don’t want to draw sales away from that very profitable market.
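Back-of-envelope at that quoted spot price; this of course ignores the wider bus, PCB, and validation costs that come with more memory:

```python
# Raw memory cost at the quoted $3.36/GB for GDDR6.
price_per_gb = 3.36
for gb in (8, 16, 24):
    print(f"{gb:>2} GB -> ${gb * price_per_gb:.2f} of GDDR6")
# 8 GB -> $26.88, 16 GB -> $53.76, 24 GB -> $80.64
```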
I haven’t personally played a game that uses more than one dynamic reflection probe at a time. They are pretty expensive, especially if you want them to look high resolution and want the shading in them to look accurate.
That’s true, but after a few frames RT (especially with Nvidia’s ray reconstruction) will usually converge to ‘visually indistinguishable from reference’, while light probes and such never really converge. I think that’s a pretty significant difference.
RT hardware came out three generations ago, and I don’t think they really vary the number of rays much per environment (and RT itself is an O(log n) problem).
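Rough numbers behind the O(log n) claim: a balanced BVH over n triangles is about log2(n) levels deep, so per-ray traversal work grows very slowly as scenes get bigger:

```python
# A 1000x bigger scene only adds ~10 BVH levels per ray.
import math
for n in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} tris -> ~{math.log2(n):.0f} BVH levels")
```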
There are cases where screen space can resolve a scene perfectly. Rare cases. That also happen to break down if the user can interact with the scene in any way.
Of course, no renderer is really good enough unless it considers wave effects. If my game can’t dynamically simulate the effect of a diffraction grating, it may as well be useless.
(/s if you really need it)
Unless you consider wireframe graphics. Idk when triangle rasterization first started being used, but it’s conceptually more similar to wireframe graphics than to ray tracing. Also, I don’t really know what you mean by ‘fake it with alpha’.
I haven’t played The Finals myself, but as of the pre-release version I watched a video about, lighting didn’t update at all without raytracing enabled. It’s pretty hard to get any sort of dynamic lighting without raytracing, if not impossible, depending on how you define raytracing. Basically, if they want a dynamic lighting feature that works without ‘raytracing’, they have to build a whole separate GI system using world-space probes, maybe even dynamically voxelizing the entire scene. Neither of those is easy on performance, though usually not as bad as normal hardware RT and ReSTIR. And neither is good at reflections or fine detail, which is why games that want to look better than that usually switch to doing it the normal way.
I feel like if you have something at the level of a 3070 or above at 1080p, path tracing, even with the upscaling you’ll need, can be an option. At least based on my experience with Portal RTX.
Personally I have a 3060, but (in the one other game I’ve actually played on it with raytracing support) I still turned on raytraced shadows in Halo Infinite because I couldn’t really notice a difference in responsiveness. There definitely was one (I have a 144 Hz monitor), but I just couldn’t notice it.
Optimization is usually possible, but it’s easier said than done. Often sacrifices have to be made, though the result may still be a better value per unit of frame time. Sometimes there’s more that can be done; sometimes a scene really is just that hard to light and render.
It’s hard to make any sweeping statements, but I will say that none of that potential optimization is going to happen without actually hiring graphics devs. Which costs money. And you know what corporations like to do when anything they don’t consider important costs money. So that’s probably a factor a lot of the time.
I disagree; I think a lot of raytraced shaders successfully make the game look better while still leaning into the stylized look. I also think it’s unfair to say the game looks bad originally. It doesn’t look realistic, but it has a consistent and compelling visual style.
Look at the Minecraft update trailers for example. They go in that direction even further, by simplifying all of the textures. Yet even with the perfect offline path tracing, it doesn’t look bad.
Hmm, I just tried editing a systemd service with Kate and it did actually give me an authenticator popup when I tried to save it.
Although then the prompt expired and now it does nothing when I try to save it. Restarted Kate and now it works again…
I haven’t tried that before
When I try to go into the sudoers.d folder tho, it just says I can’t, and the same thing happens when I try to open the sudoers file in Kate. If I try to copy and paste a systemd service in Dolphin tho, it just says I don’t have permission and doesn’t give a prompt.
lol if I open it with nano through sudo it says ‘sudoers is meant to be read only’