So I feel your pain. I did hear programming for Wayland is harder than X11, but I never did either so I have no idea if that is true.
I can't count the number of things that either no longer work or no longer exist at all because of nothing other than the fragmented and ever-changing nature of the web. Aside from sites and apps that required Flash or Silverlight, I have several pieces of expensive actual hardware that became partially or wholly unusable because the built-in, non-updatable web interface requires an old version of Java or ActiveX.
Indeed we do all know how that went. It went straight to dogshit.
That's a big problem. When things become an optional extension for a compositor, that means you cannot reliably deploy something that depends on it to Wayland.
At this moment, things in the wild are coupling themselves to libwayland-client and in practice ossifying its ABI as a standard no matter what the wayland orgs say about it.
I'm not happy with how the collaboration and planning between various parties involved went over years and I do believe that a lot of these adoption pains are fully self-inflicted, but that has absolutely nothing to do with Wayland's technical design.
Do you mean the Window Manager layer?
That sounds like a different way of saying "impossible".
In X11 I can create an automation tool that works regardless of the underlying WM, or even if there isn't an underlying WM.
Can't do that with Wayland.
Those aren't the only two options. There's no need to compromise the entire system for everybody if the Wayland devs would agree to configuration that controls these things.
Then those of us who need stuff to work regardless of WM would get stuff to work, and the rest of the Wayland users can simply go with a WM that suits them.
This issue is typical of the thinking that went into Wayland: No consideration was made when Wayland was announced of the fact that there were far simpler ways of achieving the same level of security.
Instead of implementing it one way that works forever with any WM/DE (X11), now you must rely on each individual wayland compositor to implement one or more optional extensions correctly, and constantly deal with bug reports of people that are using unsupported or broken compositors.
bindsym $mod+r exec obs-control toggle-recording
to their configuration. What's more, they can do this in response to other system events. A user might wish to change the recording configuration of OBS in response to an application opening, and it now becomes possible to write a script which opens the application and applies the change. If your disdain for desktop isolation is so great, you needn't even use D-Bus. Registering a simple UNIX socket that accepts commands would work equally well in this case.
What's really desired here is a standard way for programs to expose user-facing commands to the system, which is clearly not within the scope of the specification for a display server. The problem with X11 is that it has for a long time exposed too much unrelated functionality like this to the user, and so many apps have become reliant on this and developers have neglected the creation of portable ways to achieve these objectives. A new specification for display servers that excludes this harmful behaviour is a clear long-term positive.
I don't think it's always practical or desired to move the hotkey support completely out of the program itself. Most users (especially consumer/nontechnical people, such as many OBS users) are not willing to set up hotkeys through a third-party program just to control OBS externally... so I think it needs to support hotkeys internally, whether or not control is also possible via an external socket/D-Bus/etc.
It's extremely user hostile.
> The problem with X11 is that it has for a long time exposed too much unrelated functionality like this to the user
It's not "unrelated functionality". It's an entirely generic ability to listen to events that is available with Wayland as well, just with an added restriction.
Sounds like a nightmare for everyone involved to me
If there was a single standard way, great. In the meantime I'll stick to X11, which isn't this incredibly user-hostile.
The extra security means many automation tasks need to be done as extensions at the compositor level, making this even worse.
When designed by committee.
With conflicting interests.
And Veto Powers.
It's definitely not Unix-like, since file descriptors, reads and writes, epoll, and mmap for IPC are nowhere to be found. Instead you have 'objects' with these lifecycle methods that create/release resources (probably committing the design sin of having these for things which should be pure data, like descriptors).
And what's with these XML protocol files? The UNIX-standard approach is to have a C header for your code that declares the library's API, which a makefile can then just consume. There's a standard way of supplying, finding and consuming headers. Even binding generators are probably more comfortable with C headers than with this XML thing.
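For context, the XML in question is a protocol description consumed by wayland-scanner, which generates the C glue code. A fragment in the style of wayland.xml looks roughly like this (paraphrased from memory; see the actual wayland.xml for the exact text):

```xml
<interface name="wl_output" version="4">
  <event name="mode">
    <arg name="flags" type="uint"/>
    <arg name="width" type="int"/>
    <arg name="height" type="int"/>
    <arg name="refresh" type="int" summary="vertical refresh rate in mHz"/>
  </event>
</interface>
```

Note that even a query like "what resolution is this output?" is expressed as an event the compositor sends you, which is exactly the callback-heavy shape the next complaint is about.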
And what's with the callbacks for everything, like screen resolution queries? In Win32, you can do it with a single synchronous API call that returns a struct that has all the info. It's not like you have to touch the disk or network to get this. In cases where you do, you usually have a call that dispatches a message to another window (which you can also dispatch yourself), and you have to listen to the response.
I did some X11 programming as part of work, and it's entirely reasonable and conventional compared to this, much more like Win32 (maybe even a bit more pleasant, but I'm no expert on it).
The API sounds awful (and I've had ChatGPT generate me some example programs, and it's somehow even worse than the author describes), and not only that, the requirement that 'everything be an object', with chains and trees of objects being created, introduces a huge source of bugs and bookkeeping performance overhead on the application side.
Yes, you do have to do something like this with some things under Windows, but the reason for this is that these objects have duplicates in the Windows kernel.
But here it looks like this is just to satisfy the sensibilities of the designer.
Honestly this sounds like the most epic case of NIH syndrome. Like these guys wanted to write their own OS and userland and break with existing conventions.
Win32 has exactly the same set of problems here as Wayland does. More so, because Win32 just gives you back opaque handles which you are expected to keep track of and use the Win32 API for any meaningful interactions.
The only understandable complaint is that wayland makes it hard for different windows to interact with one another for security. IMO, that's a silly goal to chase after, but that's just me.
The only problem that has existed is that originally there was a single DPI value, not a different DPI value for each monitor.
This has never created any problem for the people using multiple monitors with the same resolution, but only for the people who have used multiple monitors having different resolutions and who might have not liked the changes in windows size when moving a window from a monitor to another monitor.
That was indeed a problem, but it really affected a rather niche use case and it was also trivial to solve without any change in the X11 design, by just making DPI a per monitor variable, which was done long ago.
So criticizing X11 about a supposed problem with HiDPI is incorrect. I have used only multiple 4k monitors with my PCs, with X11, for more than a dozen years, and I never had any problem with HiDPI, with the exception of many Java programs written by morons, which ignore the system settings and which also do not allow the user to change the font used by them. I do not know what the problem with the Java programmers is, but I never encountered programs with such behavior except those written in Java. Moreover, the Java programs are also the only ones that had problems with monitors using 10 bits per color component.
While X11 itself never had problems with supporting HiDPI, at least not in the XFCE that I am using, I have heard that other desktop environments have created HiDPI problems that have nothing to do with X11, by not exposing the X11 DPI settings and providing instead some "window scaling" setting. I do not know how that is implemented, but judging from the complaints I have seen, there is a good chance it is implemented wrongly.

I cannot imagine how one could correctly use a "window scaling" factor, because the font rendering program must know the true DPI value when rendering, for instance, a 12-point font. If rendering is done at a wrong DPI and the image is then scaled, the result is garbage, so in that case it would not be surprising that people claimed HiDPI works badly in X11, when in fact it was Gnome or whatever desktop environment was used that was guilty of the bad support, not X11. I never had to fight with those desktop environments, but I assume that even those would have worked correctly with HiDPI when using xrandr to configure X11, instead of the settings of the desktop environment.
These kinds of posts just show how disconnected some of y'all are from what most Linux desktop users nowadays actually need from the desktop platform.
Even without configuring distinct DPIs per monitor that was not a problem for me, because on the small screen of the laptop I kept only some less important application, like the e-mail program, while working on the bigger external displays, so I had no reason to move windows between the small screen of the laptop and the bigger external displays.
But like I said, setting a different DPI value for each monitor has been added to X11 many years ago, I do not remember how many.
I do not see why one would want to move windows between the external displays and the laptop, when you have connected external displays, so I consider this a niche use case, i.e. moving windows between small screens and big screens. I agree with you that having simultaneously big screens and small screens is not niche, so I was not referring to this.
Without a per-screen DPI value you cannot control the ratio between the sizes of a window when it is moved between the big screen and the small screen. But even when you control the ratio, moving windows between screens of different sizes does not work well, because you must choose some compromise: if you keep the same physical size, some windows from the big screen will not fit on the small screen, and if you make the windows occupy the same fraction of the screen size, they will change size during the move and be more difficult to use on the small screen.
But like I have said, this no longer matters as the problem has been solved even for this niche use case. I do not even remember if this problem still existed by the time when Wayland became usable.
However, such a thing could be relatively easily added to X11 without changing the X protocol, so this does not appear as a sufficient motivation for the existence of Wayland.
I have not tried Wayland yet, because I have never heard anyone describing an important enough advantage of Wayland, while it definitely has disadvantages, like not being network transparent, which is an X11 feature that I use.
Therefore, I do not know what the truth is, but from the complaints that I have heard, the problem seems to be that in Wayland it is not simple to control the access rights to windows and clipboards.
Yes, access to those must be restricted, but it must be very easy for users to specify when to share windows with someone else or between their own applications. The complaints about Wayland indicate that this mechanism of how to allow sharing has not been thought well. It should have been something as easy as clicking a set of windows to specify something like the first being allowed to access the others, or like each of them being able to access all the others.
This should have been a major consideration when designing access control and it appears that a lot of such essential requirements have been overlooked when Wayland was designed and they had to be patched somehow later, which does not inspire confidence in the quality of the design.
At a higher level, I've never found someone who is deeply familiar with the Linux GUI software stack who also thinks Wayland is the wrong path, while subjectively as a user most or all of my Linux GUI machines are using Wayland and there's no noticeable difference.
From an app dev perspective, I have a small app I maintain that runs on Mac and Linux with GPU acceleration, and at no point did I need to make any choices related to Wayland vs X.
So, overall, the case that Wayland has some grave technical or strategic flaws just doesn't pass the smell test. Maybe I'm missing something?
That means that I can run a program, e.g. Firefox, either on my PC or on one of my servers, and I see the same Firefox windows on my display and I am able to use Firefox in the same way, regardless of whether I run it locally or on a server.
The same with any other program. I cannot do the same with Wayland, which can display only the output of programs that are running on my PC.
This is an example of a feature that is irrelevant for those who have a single computer, but there are enough users with multiple computers for whom Wayland is not good enough.
Wayland was designed to satisfy only the needs of a subset of the Linux users. This would have been completely fine, except that now many Linux distributions evolve in a direction where they attempt to force Wayland on everybody, both on those for which Wayland is good enough and on those for which Wayland is not good enough.
I have already passed through a traumatic experience when a gang of incompetents have captured an essential open-source project and they have removed all the features that made that project useful and then they have forced their ideas of what the application should do upon the users. That happened when KDE 3.5 was replaced by KDE 4.
After a disastrous testing of KDE 4 (disastrous not due to bugs but due to intentional design choices incompatible with my needs), I reverted to KDE 3.5 for a couple of years, until the friction needed to keep it has become so great that I was forced to switch to XFCE. At least at that time there was an alternative.
Now, Wayland does not have an alternative, despite not being adequate for everybody. For now, X11 works fine, but since it seems unlikely that Wayland will ever be suitable for me, I am evaluating whether I should maintain a fork of X11 for myself or write a replacement containing only the functionality that I need. That would not be so complex as there are many features of X11 or Wayland that I do not use, so implementing only what I really need might be simple enough. The main application that I do not control would be an Internet browser, like Firefox or Chromium, but that I could run in a VM with Wayland, which would be preferable for security anyway.
Not done it for a while, but ssh into remote machine and start a GUI app used to work for me. It needs one setting in ssh config AFAIK.
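For anyone trying to reproduce this: the one setting the parent is likely referring to is X11 forwarding. Assuming the server side permits it (X11Forwarding yes in its sshd_config), the client side needs only this (host name is a placeholder):

```
# ~/.ssh/config on the client
Host myserver
    ForwardX11 yes
```

After that, `ssh myserver firefox` draws the remote Firefox on the local display; passing `-X` on the command line does the same thing as a one-off.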
> subjectively as a user most or all of my Linux GUI machines are using Wayland and there's no noticeable difference.
Not anymore. There used to be, though, and I think it may have got a bad name by being rolled out by distros (especially as the default) before it was ready for most users. I can remember several issues; the worst was with screenshots.
I can accept that, as I use a rolling release distro as a daily driver, so you expect some issues, but it's not OK if those rough edges hit people using more mainstream distros (which I think they did).
Window positioning API seems like the biggest oversight to me, as someone developing a multi window desktop app at work. That and global hotkeys and accessibility.
Due to my reliance on X over SSH, I only run Wayland where I strictly need it (namely, on my HDR display).
Actually, for Wayland there is wprs for remote display of apps, so there goes the network transparency argument...
I do not know the current state of RDP etc., but does it allow you to open a single application rather than an entire desktop on Linux, and does it display correctly for the device you are using rather than the one the app is running on?
My question was: "How often has that been a problem? Is it a vulnerability that has been, or is likely to be, exploited in practice?"
I would have thought that filesystem access is the biggest issue, followed by network access. There are solutions for these, of course, but in most cases the default is either unrestricted or what the app asked for.
>Getting any example application to work is so incredibly ridiculous, that every second I program on Wayland, I yarn for the times I did Win32 programming.
And that comes from the core of how Wayland is designed.
In Win32, the stable interface / ABI is the set of C functions provided by the operating system through DLLs. These are always dynamically loaded, so Microsoft is free to change the internal interface used for controlling windows at any time. Because of this, decades-old .exes still run fine on Windows 11.
In Wayland, the stable interface is the binary protocol to the compositor, in addition to the libwayland-client library plus extensions. Instead of that socket being an "implementation detail", it's now something that all programs that just want to make a window have to deal with. You also can't just use the socket and ignore the libwayland libraries, because mesa uses libwayland-client and you probably want hardware acceleration.
The other big issue is that the core Wayland protocol is useless on its own; you have to use a bunch of protocol extensions to do anything, and different compositors may implement different versions of them. On Win32, Microsoft can just add another C function to user32.dll, and you don't have to think about how that gets transformed into messages at the socket layer, or about compatibility issues with different sets of extensions being supported by different compositors.
Anyone know of exceptions? People who get mesa working anyhow, some way?
It also doesn't preclude people from making nicer experiences on top of libwayland. Again, I'd be curious to see what material is out there. It feels like a library that materializes the current state of things into a local view would go a long way toward dispelling the rage of people such as the author here, who seems to detest callbacks with fiery rage.
The idea of the Wayland registry seems unifying and grand to me. OK, yes, it's async and doesn't hide it? A lot of ink spilled to be upset about that, and it doesn't feel like an immutable fact that must govern life, if that for some reason makes you as mad as this dude.
You don't have to use Mesa's wayland-egl to make EGL work with Wayland, you can easily pass dmabufs by yourself - though this will theoretically be less portable as dmabufs are Linux specific (but practically they're also implemented by various BSDs).
Some compositor's insistence on CSD can make it a bit more complex since you get that in Win32 for free, but on the sane ones you just add xdg-decoration and you're done.
Also, this is all apples-to-oranges anyway, as Win32 is a toolkit, while wayland-client is just a protocol (de)serializer.
I believe the youth nowadays calls what you wrote "copium". Because creating a simple window in Win32 (a whole program, in fact) looks like this:
#ifndef UNICODE
#define UNICODE
#endif

#include <windows.h>

LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam);

int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PWSTR pCmdLine, int nCmdShow)
{
    const wchar_t CLASS_NAME[] = L"Sample Window Class";

    WNDCLASS wc = { };
    wc.lpfnWndProc = WindowProc;
    wc.hInstance = hInstance;
    wc.lpszClassName = CLASS_NAME;
    RegisterClass(&wc);

    HWND hwnd = CreateWindowEx(0, CLASS_NAME, L"Hello World! Program",
                               WS_OVERLAPPEDWINDOW,
                               CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
                               NULL, NULL, hInstance, NULL);
    if (hwnd == NULL)
    {
        return 0;
    }
    ShowWindow(hwnd, nCmdShow);

    MSG msg = { };
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}

LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch (uMsg)
    {
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    default:
        return DefWindowProc(hwnd, uMsg, wParam, lParam);
    }
}
That's significantly less than 400 lines, and it requires essentially just two function calls, RegisterClass and CreateWindowEx; the rest is the message loop and its callback. It's absolutely trivial in comparison. Same thing with Xlib; < 100 lines of C code is enough for a simple app.
Those applications seemed quite simple to write in comparison with what is described in the parent article, despite doing animation in real time based on the mouse and keyboard inputs.
What scares me though are all the responsibilities passed to compositors, because what ends up happening is that each compositor may reimplement what should be common functionality in annoying ways. This is especially true for input things, like key remapping. This ultimately fragments linux desktop experiences even harder than it was before.
It also has noticeable mouse lag for me, I really hope this isn't due to avoiding tearing.
Sandboxing defeats the point of said applications. If you want your computer to have no functionality, check out Figma. A clickable prototype sounds like precisely the security the world needs right now.
The only thing keeping those away from Linux was its market share. With npm malware on the rise, this is no longer enough of a protection.
Linux has always been a system where the existence of malware was ignored, especially on the desktop, contrary to other OSes (tooling included). But for a couple of years now, slooow movements can be observed (I observe them) trying to correct this colossal mistake.
Whether this is the best way to do it or not, I won't get into. I just welcome most of the advancements on this matter in Linux, given such an absence of worry, keeping my fingers crossed that the needed tooling arrives in time (ten years behind Windows, I think).
Two years ago, trusted code like xz-utils [0] had seven months of freedom in the infected systems.
[0] https://news.ycombinator.com/item?id=39891607
> its not related to x11
Ideally one wants to detect malware as early as possible, and to restrict what it can do from the beginning, until it is noticed.
In this case Wayland, intentionally or not, is more restrictive than X11 with access to the screen and keyboard.
I know, I know, the community's reply will be a couple more downvotes and "that already existed", "you could use, bla bla bla", and this is how Linux is ten years (at minimum) behind Windows in tooling for this matter ¯\_(ツ)_/¯
This only matters if you compare properly sandboxed apps; otherwise an app that runs with your uid can still do harm and, in practice, indirectly compromise the whole system.
Are most flatpaks _properly_ sandboxed? Of course not.
What it does is simple - all the functions that deal with windows/handles or events simply do not work on ones that you don't have access to. For example, the EnumWindows function, which lets you walk the tree of windows, simply does not see the ones the process has no access to. SetWindowsHookEx, which allows you to intercept and modify messages meant for other windows, simply doesn't fire for messages you're not supposed to access.
Granted, outside of UWP apps, the application of security is rather lax (this is for legacy purposes; the security's there, just not enforced), but for apps running as admin, or UWP apps, the sandboxing is rather solid.
Moreover, it is possible to choose as the default policy that no program may access a window that it did not open, but then there must exist a very simple method for the user to specify when access is permitted, e.g. by clicking a set of windows to grant access to them.
I have experienced tearing only once, on a laptop about 10 years ago, which used NVIDIA Optimus, i.e. an NVIDIA GPU without direct video output, which used the Intel GPU to provide outputs. NVIDIA Optimus was a known source of problems in Linux and unlike with any separate NVIDIA GPU, which always worked out-of-the-box without any problems for me, with that NVIDIA Optimus I had to fiddle with the settings for a couple of days until I solved all problems, including the tearing problem.
Perhaps Wayland never had tearing problems, but I have used X11 for several decades on a variety of desktops and laptops and tearing has almost never been a problem.
However, most of the time I have used only NVIDIA or Intel GPUs for display and it seems that most complaints about tearing have been about AMD. I have always used and I am still using AMD GPUs too, but I use those for computations, not connected to monitors, so I do not know if they could have tearing problems.
It's getting a bit boring, especially since no one really does more than complain.
Some people just wanna complain
Some people just want to live in the 80s forever.
I think this shaming of free software users that want to make other choices is rather terrible.
Please don't insult me by insinuating that I think that sysvinit is anything other than a weird esoteric init program which has, in the past and on linux distros, been the supporting piece of a garbage heap of poorly written shell scripts (and which is currently on BSDs the supporting piece of a relatively okay designed heap of shell scripts which implement a silly service management model that I also don't like).
Yep. Today, I would tend to agree with this.
> switching to it caused no issues
Yeah, okay, there's no need to make wild untrue claims to support your position. The initial adoption was rough, things absolutely did break, and some of those rough edges are still around to bite the unwary (enable-linger/KillUserProcesses are my "favorite" footgun that will never be fixed because systemd thinks killing your stuff is a feature).
People who grew up on sysvinit based service management and can't handle change (the partially straw man group you are complaining about).
People who only know about sysvinit based service management and systemd and formed their opinions of systemd based on "sysvinit == terrible confusing shell scripts; systemd == config files" (you - as a first impression).
And people who actually know the advantages, disadvantages, and functional details of sysvinit based service management, systemd, and the plethora of other attempts/approaches at solving these issues and can support their arguments in favour of or against systemd with actually reasoned arguments.
The first group is easy to ignore, and a minority. The third group produces the biggest chunk of well informed content on systemd. And the second group seems to think that anyone in the third group who is in favour of systemd, must be one of them, and anyone who is against systemd, must be in the first group (note also: the false dichotomy).
Rather than straw manning your opponents in this discussion while pretending this is a discussion of the pros and cons of "declarative service management", could you instead contribute something useful? Lacking that, maybe just stop trying to contribute?
By saying stuff like this, you aren't going to convert sysvinit users to anything and you aren't going to convince anyone who has genuine criticism of systemd of anything.
There are other ancient service management systems that were much more coherent and which did not show any disadvantage in comparison with systemd, e.g. even the sysvinit-based service management of FreeBSD or other *BSD, which were and are much better than the "sysvinit-based" of old Linux.
An example of how a replacement for the traditional UNIX service management can be well designed was the daemontools of Daniel J. Bernstein, written more than a quarter of a century ago, long before systemd. There are derivatives of daemontools that are kept up-to-date, and they are much simpler and more secure than systemd, while systemd does not have any advantage that can justify its complexity, opacity and interference with other applications.
All the non-systemd service management solutions have the advantage that even if you are not familiar with a computer, it is easy to debug any problem because all the behavior is written in a bunch of text files. With systemd, you can never be sure what happens. The behavior implemented by systemd may change between versions, you might not have the source of systemd or the source of the particular version installed on the computer with problems, the source may be very complex in comparison with traditional shell scripts or with the very simplified scripts of something like daemontools.
Thus the claim that systemd uses "descriptive" files is not really true, because the uncertainties about what those files describe are much greater than for any other service management solution.
Even a set of shell scripts, like that of FreeBSD, can be as "descriptive" as the configuration files of systemd, when all the service scripts share a common set of definitions of functions and of configuration parameters.
There is nothing descriptive about `Wants`, `Before`, `After`, `Requires`, `Requisite`, `BindsTo`, `PartOf`, `Upholds`, `Conflicts`, ... we could go on. And we can stop there. (To clarify, _I_ know what these all mean, but certainly I didn't have a clue what they meant until I read the docs about and didn't fully understand the nuances of these until I re-read those docs many times and read the source code.)
But the "declarative"-ness of systemd's configuration files can also be put into question when it's incredibly common to find an `ExecStartPre` containing a shell oneliner.
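To make that concrete, here is a hypothetical unit fragment of the shape being described (service name and paths are invented), where the "declarative" file still ends up embedding imperative shell:

```ini
[Unit]
Description=Example service
After=network-online.target

[Service]
# Declarative on the surface...
Restart=on-failure
# ...but the actual setup logic is still a shell one-liner.
ExecStartPre=/bin/sh -c 'mkdir -p /var/lib/myapp && chown myapp: /var/lib/myapp'
ExecStart=/usr/bin/myapp
```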
That being said, my goal was not to start a discussion about systemd here. My goal was to call out the completely unproductive strawmanning of systemd critics by the person I was replying to.
Especially coming from people who don't put in the work to build something else.
It's really bizarre how the open-source community degraded into this space of constant aggressive, insulting screeching over every single project that's actually moving something in the Linux world, coming from people who don't put any code behind it or do anything but attack any change since the 1990s.
To hell with that, Linux developers deserve better than this constant barrage of hate.
They backed systemd. I think you need to stop with conspiracy thinking and admit to yourself that maybe the solution was actually better than before. And as such, if you build something even better, they'll switch too.
But it has to BE better, not just a pile of yelling.
The problem with systemd and Wayland is that they are not like any other projects which the users may choose depending on how useful they are for their needs.
Wayland and systemd are forced by a few distribution maintainers upon a great number of Linux users, regardless of what those users may want.
Many users may not be directly impacted by these changes, so they may trust that the maintainers know what they are doing.
But there are also many users for which these replacements would require a lot of work so such users would expect a better justification of why systemd or Wayland are an improvement over alternatives. I have seen tons of presentations about systemd or Wayland, but none of them were convincing. There were never any correct comparisons with alternatives to show that systemd or Wayland are better at something.
I agree that it would be very desirable for X11 to be replaced by something better.
But I have never seen any piece of information that would indicate that Wayland is better. On the contrary, almost every detail that I learn about Wayland shows a bad design decision.
For example, before reading the parent article, I was not aware that the Wayland client API is so reliant on callback functions. In my opinion, this is bad because such an API is inefficient, as it leads to a lot of code duplication.
In my opinion, the cases when it is a good choice to use callback functions are very rare. Instead of callback functions, it would have been better to use some kind of event queue, because there is little else that callback functions can do except insert the event into a queue for handling by the main thread.
The only "advantage" of callback functions is that the implementer of the API might have chosen a bad implementation of an event queue, while an API based on callback functions is not yet committed to a particular queue implementation, allowing the user of the API to do the right thing, but possibly with a waste of code in the initial part of all callback functions.
Avoiding the choice of an implementation for the event queue could still be done efficiently if there were a single callback that you could use for all Wayland functions, which would be your own implementation of the queue insertion function. This would be a good API, as there would be no code duplication, while also not forcing an implementation choice. Multiple callback functions make sense on the server side of a protocol, not on the client side of a protocol, because the messages passing through the protocol might be seen as remote procedure calls originating from the client.
Why even try to start a conversation with that attitude? Wayland doesn't get nearly as much hate as Windows, Chrome, or iOS. But I guess literally nothing is worth writing an article that has the word "fuck" in it 7 times, because that crosses some kind of ultimate line?
For other open-source applications, if you do not like them you do not install them and you choose something else. There is no reason for any complaint.
On the other hand, you may have used some Linux distribution for a decade and then someone forces systemd and/or Wayland upon you, regardless of whether you want them or not.
In such cases it is very reasonable to complain about this, because whoever has chosen systemd and Wayland now forces you to do a lot of unnecessary work, either by changing your workflow to accommodate them or by switching to another distribution, which also requires a new workflow.
I have not switched to either systemd or Wayland, because I have never seen anyone capable of explaining even a single advantage they have over what I am using.
I tested systemd once, by installing Arch and using it for a month, but I found a bug so ugly that my opinion of the technical competence of the systemd designers dropped so low that I have never tried it again.
I am using Gentoo, which unlike other distributions does not yet force the maintainers' choices upon the users, so I can still choose not to use either systemd or Wayland. However, I am worried about the future, because both of them continue to invade other software packages; even without using the complete systemd you may need to use some parts extracted from it, because traditional packages have been replaced by packages that somehow depend on systemd.
Eventually, it is likely that I will have to write replacements for those packages myself in order to expel systemd completely, but I hate doing such unnecessary work when I was happy with the older packages, which worked perfectly fine and needed no replacement.
We can argue about limitations of X.org's implementation of the X server, but, as demonstrated by Phoenix, X.org doesn't have to be the only X server implementation.
https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
There is no async involved, so the function-coloring argument doesn't really apply here.
I don't share the author's hate for them, but they are definitely more verbose than popping from an event queue and switching on the event type, à la the SDL loop. Plenty of callbacks just set parameters in some state struct and don't propagate further. And you need to fill in the vtable structs and register them as listeners. This boilerplate is probably the reason why basic window examples have ~200 lines instead of 40, but in a larger project it's barely a problem.
At the same time, most of this post really is just a rant essentially saying that a low-level library is so flexible that using it directly results in code so verbose it can hardly be read. Yes, that's how good low-level designs always are.
You can turn a generic portable asynchronous ANSI C interface into a simple, blocking and platform-specific one with an abstraction layer. You can integrate it with all sorts of existing event loops and programming frameworks. You can customize it all you like but using it directly in an application will cost you a lot of patience. At the same time, you can't go in the opposite direction; from a "simple" blocking black-box interface to something that can reasonably host a complex GUI toolkit. If you're after simplicity, go higher-level.
At the very least there should be a standardized (and centralized) client library on top of Wayland but below widget frameworks like GTK or Qt which implements the missing "desktop window system features": opening, moving, sizing windows (with decorations please), mouse and keyboard input events, clipboard, drag-and-drop. Non-GTK/Qt applications should never have to talk directly to the asynchronous Wayland APIs, only to this wrapper library.
Such a library should be designed to make programmers want to move on from X11 (because writing code against Xlib is horrible, but somehow Wayland managed to be even worse). Tbh, this new window-system client library (at first on top of X11) should have been the Wayland project's top priority before starting work on the actual X11 replacement, and it should have shipped on all desktop Linux distros at least a decade ago, so that application programmers could have been won over (and feedback collected) even before Wayland shipped its first version.
Who do you think works on the various parts of Wayland if not "burned out hobbyists"?
Not to draw any specific analogy, but sometimes a fussy low-level interface is just important to have.
Vulkan's "API design deficits" (to put it mildly) have been recognized by Khronos though, and turning that mess around and making the API a "joy to use" is one of Khronos' main priorities at the moment (kudos to them for doing that).
Regardless, that's sort of my point: having a lower level fiddly layer is a desirable quality, and Xlib being rebased on top of it isn't exactly a counterexample.
Because those libraries will not materialize in time, and more importantly the hobbyists who are supposed to write those libraries don't have the testing capabilities of large organizations (e.g. testing across hundreds of hardware configurations).
It satisfies the requirement to "make easy things easy, make hard things doable" and it also gets you cross platform support.
> Make easy things easy. Make hard things doable.
is generally unachievable. Instead, pick one:
- easy things easy, hard things impossible
- easy things tedious, hard things possible
(Unless you want to maintain two sets of interfaces in parallel.)
Do people recommend the API surface should be totally flat and the same for all developers?
I struggled with this initially as well; it's pretty poorly explained in the docs. Short explanation:
The wayland-client library implements queues over the socket. So to get it, you have to think about when the socket is read from and written to, and when the queues are pulled from or pushed to. There is always a default queue, but, for example, EGL+OpenGL creates its own queue, which makes it even more confusing.
- `wl_display_dispatch_pending()` only pulls messages from the default queue into callbacks
- `wl_display_dispatch()` also tries a blocking read on the socket if no messages are in the queue
- quite recently `wl_display_dispatch_queue_timeout()` was finally added, so you can do a read from the socket with a timeout; earlier you had to hack the function together yourself
- `wl_display_flush()` writes enqueued request messages to the socket
- `wl_display_roundtrip()` sends a sync request and does a blocking wait for the response; the effect is that all enqueued requests are sent and all pending responses are received and processed. For example, during init you call it once to create the registry and enumerate the globals, and a second time to enumerate further protocol objects that got registered in the registry callback, such as the seat
- `eglSwapBuffers()` operates on its own queue, but reading from the socket also enqueues onto the default queue, so you should always call `wl_display_dispatch_pending()` (on the default queue) afterwards
There is also a way to avoid getting stuck in `eglSwapBuffers()` while a window is inhibited: disable the blocking with `eglSwapInterval(0)` and use a `wl_surface_frame()` callback, which notifies you when you can redraw and swap again. But then you can't do blocking reads with `wl_display_dispatch()` anymore; you have to use the timeout variant. Once you use it this way, you can also easily manage multiple vsynced windows independently on the same thread, and even put the wayland socket in an epoll event loop. None of this is documented, of course.
The clipboard interface is definitely compromised a bit by being shared with the drag-and-drop events, but it's not that complicated. There is also a pitfall: if you copy-paste into your own application and don't use an async event loop, you can deadlock, because you are expected to write and read both ends of the same transfer at the same time.
The API feels like a hardcore OOP/C++ developer's first C interface.
No you don't need to reinvent the wheel thank you.
Anyway, if I was already persuaded that Wayland has a rather backwards design (my reasons here: https://news.ycombinator.com/item?id=47477083), now I have confirmation that its philosophy is something like "put surfaces on the screen and distribute events to the clients; everything else is not my business", and that exploring alternative approaches to window management is still worth it. Having applications manage all their resources (canvases, events, decorations) is not bad per se (video games, for example), but not all of them need to.