• 0 Posts
  • 170 Comments
Joined 10 months ago
Cake day: January 26th, 2025

  • Software compatibility is a problem on X as well, so I’m extrapolating. I don’t expect the situation to get better though. I’ve managed software that caused fucking kernel panics unless it ran on Gnome. The support window for this type of software is extremely narrow and some vendors will tell you to go pound sand unless you run exactly what they want.

    I’m no longer working with either educational or research IT, so at least it’s someone else’s problem.

    As for ThinLinc, their customers have been asking what their plan is for the past decade, but to quote them: ”Fundamentally, Wayland is not compatible with remote desktops in its core design.” (And that was made clear by everyone back in 2008.)

    Edit: tangentially related, the only reasonable way to run VNC against Wayland now is the tightly coupled VNC server within the compositor (you want intel on window placement and redraws; just encoding the framebuffer is bad). If you want to build a system on top of that, you need to integrate with every compositor separately, even though they all support ”VNC” in some capacity. The result is that vendors go for the common denominator: running in a VM and grabbing the framebuffer from the hypervisor. The user experience is absolute hot garbage compared to TigerVNC on X.


  • It’s great that most showstoppers are fixed now. Seventeen years later.

    But I’ll bite: viable software-rendered and/or hardware-accelerated remote desktop support with load balancing and multiple users per server (headless and GPU-less). So far - maybe possible. But then you need to allow different users to select different desktop environments (due to either user preferences or actual business requirements). All this may be technically possible, but the architecture of Wayland makes it very hard to implement and support in practice. And if you do get it going, the hard focus on GPU acceleration yields an extreme cost increase, as you now need to buy expensive Nvidia GPUs for VDI, with even more expensive licenses. Every frame can’t be perfect over a WAN link.

    This is trivial with X; multiple commercially supported solutions exist, see for example ThinLinc. It is deployable in literally ten minutes, battle tested, and works well. I know of multiple institutional users actively selecting X in current greenfield deployments because of this, rolling out to thousands of users in well-funded, high-profile projects.

    As for the KDE showstopper list - that’s exactly my point. I can’t put my showstoppers in a single place; I need to report to KDE, Gnome and wlroots and then track all of them. That’s the huge architectural flaw here. We can barely get commercial vendors to interact with a single project, and the Wayland architecture requires them to interact with a shitton of issue trackers and different APIs (apparently also dbus).

    Suddenly you have a CAD suite that only works on KDE and some FEM software that only runs on a particular version of Gnome, with a user who wants both running at the same time. I don’t care how well KDE works. I care that users can run the software they need; the desktop environment is just a tool to do that. The fragmentation between compositors really fucks this up by coupling application software to the compositor. Eventually, this will focus commercial efforts on the biggest commercial desktop environment (i.e. whatever RHEL uses), leaving the rest behind.

    (Fun story: one of my colleagues using Wayland had a post-it with ”DO NOT TURN OFF” on his monitor for the entire pandemic - his VNC session died if the DisplayPort link went down.)





  • It’s hilarious that all of this was foreseen 17 years ago by basically everyone, and here is a nice list providing just those exact points. I’ve never seen a better structured ”told ya so” in my life.

    The point isn’t that the features are there or not, but how horrendously fragmented the ecosystem is. Implementing anything trying to use that mess of API surface would be insane to attempt for any open source project, even when ignoring that the compositors are still moving targets.

    (Also, holy shit, the Gnome people really want everyone to use dbus for everything.)

    Edit: 17 years. Seventeen years. This is what we got. While the list reflects the status quo, it’s telling that it took 17 years to implement most of the features expected of a display server back in the last millennium. Most features, but not all.



  • Because instead of just using a common, well-defined API, every developer is supposed to ”work together with Wayland compositors”, of which there are many, none of which are at feature parity with X. Working together with the (at least) three major compositors is far too much work for most projects, if you can even get them on board.

    Every compositor must reimplement everything previously covered by third party software, or at least define and reimplement APIs to cover that functionality. We have been screaming about this obvious design fuckup since Wayland was first introduced, but nooo, every frame is perfect.

    Take a look at https://arcan-fe.com/ for what a properly architected display server could look like, instead of the mess we currently have with Wayland. I’m holding off on moving to Wayland for many reasons, and it wouldn’t surprise me if Arcan becomes mature and fully usable before Wayland does. If I get to place a bet on either Wayland or a few guys in a basement with a proper architecture, I know where I’ll put my money.


  • I’m not that well versed on anything Graphene, nor any related drama.

    Trust is somewhat non-technical, personal, subjective and dependent on your threat model. You can greatly improve trust via technical means and processes, as well as distribute and communicate trust via technical means. In the end, you still need to trust one or more physical people.

    Personally, my biggest issue with any software I use is future maintenance. Can I be certain this will keep working as I want it for the duration I want? Will I get security updates? There, the trust comes from the people and funding involved, seldom technology.




  • If you want to encode information into nothing but the depth of your recursive identically named folders, each level costs two characters: one for the single-character name and one for the slash. That yields about 128 possible levels; leave one off for the last filename, 127.

    If we want to name our folders something longer than a single character, we get fewer levels. If we store our files on Linux, by default we get 4096 characters to play with, so about 2k levels (unless we compile our own Linux kernel with PATH_MAX raised for this very specific purpose). With CIFS we may be able to reach up to 16k levels.

    That was my interpretation of the OP’s (admittedly bad) idea. Personally, I try to avoid implementing inodes as Church numerals.
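    A quick back-of-the-envelope sketch of the arithmetic above (the 256-character budget for the first case is my assumption, and `max_depth` is just an illustrative name):

```python
def max_depth(path_max: int, name_len: int = 1) -> int:
    # Each level costs name_len characters plus one for the slash;
    # reserve one final segment for the file itself.
    per_level = name_len + 1
    return path_max // per_level - 1

print(max_depth(256))   # 127 levels with single-character names
print(max_depth(4096))  # 2047 levels, i.e. about 2k under Linux's PATH_MAX
```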









  • It’s literally the same chip designers, production facilities and software. Every product using <5nm silicon competes for the same manufacturing capacity (fab time at TSMC in Taiwan), and all Nvidia GPUs share lots of commonalities in their software stack.

    The silicon fab producing the latest Blackwell AI chips is the same fab producing the latest consumer silicon for AMD, Apple, Intel and Nvidia. (Let’s ignore the fabs making memory for now.) Internally, I assume Nvidia has shuffled lots and lots of resources over from the consumer-oriented parts of the company to the B2B-oriented parts, severely reducing consumer focus.

    And then we have intentional price inflation and market segmentation. Cheap consumer GPUs that are a bit too efficient at LLM inference would compete with Nvidia’s DC offerings. The amount of consumer-grade silicon used for AI inference is already staggering, and Nvidia is actively holding back that market segment.