Now if I ever get around to writing that terminal emulator for fun, I'll be tempted to do it with this algorithm for the code's aesthetic appeal.
My utopian vision: First registration is free and automatic. Copyright holders get an automated notification of expiring copyright, and renewal is, say, $1000 for the first term (adjusting for inflation) and doubling thereafter (also adjusting for inflation, so you don't get a $2000 renewal but more like $4400 with 4% inflation). For corporate-held and posthumous extensions, the term would be 10 years.
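To make the arithmetic concrete (the 20-year renewal term here is my assumption; it's what makes the $4400 figure line up):

    \$1000 \times 2 \times 1.04^{20} \;\approx\; \$1000 \times 2 \times 2.19 \;\approx\; \$4400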
If copyright were infinite, then Disney would never have been able to make Snow White in the first place. They didn't invent the story!
Even if they had, it seems like a huge negative to society for copyright not to expire.
Which is kind of the entire point of patents; it's just that they last way too long relative to the speed of technological progress.
Which makes sense--I don't doubt that he is a subject matter expert where this patent is concerned. If this algorithm continues to be widely used or its use increases, then that would likely be good for him.
https://web.archive.org/web/20260317185928/https://terathon....
A professional equation editor for Windows 10/11, priced at $60, that uses Slug for rendering. Presumably he's using it to write his great FGED books.
(I get it. It's an awesome replacement for MathType. It uses OLE so that it embeds in Microsoft Word nicely. Still...)
I'm pretty confident the "stack" is C++ on Win32, with a bunch of hand-rolled libraries and no stdlib.
> I don't actually know many people still doing any of this sort of work on Windows.
Most journals don’t want submissions in Word (there are notable exceptions, e.g. Nature), and conferences without massive editorial budgets want their submissions in a format that makes it easy for them to produce proceedings (again, not Word).
I don't know to what extent Typst has been taking off recently.
I personally wrote my thesis in LuaTeX with figures in TikZ. I have no great love for the TeX language [0] or TikZ, but there are three great properties of this stack that Word lacks:
1. It plays well with version control.
2. The output quality can be very high.
3. You can script the generation of figures, including text and equations that match the formatting of the containing document, in a real programming language, without absurd levels of complexity like scripting Word. So I had little Python programs that printed out TikZ.
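A minimal sketch of the idea (mine were Python; any language that can print text works, C++ shown here for illustration):

    #include <cstdio>

    // Emit a TikZ picture for \input into the document, so the figure's
    // text and math inherit the surrounding document's formatting.
    int main() {
        std::puts("\\begin{tikzpicture}");
        std::puts("  \\draw[->] (0,0) -- (4.2,0) node[right] {$x$};");
        std::puts("  \\draw[->] (0,0) -- (0,2.2) node[above] {$y$};");
        std::printf("  \\draw plot coordinates {");
        for (int i = 0; i <= 40; ++i) {
            double x = 0.1 * i;  // sample an arbitrary example curve
            std::printf(" (%.2f,%.3f)", x, 2.0 / (1.0 + x));
        }
        std::puts(" };");
        std::puts("\\end{tikzpicture}");
    }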
No, I do not expect the average high school teacher to do this.
[0] In fact, I think both the language and the tooling are miserable to work with.
Overleaf has done a pretty good job of removing the tooling pain points, but honestly Typst can't take over soon enough.
> The output quality can be very high.
It can also be very low.
Most primary, secondary, and pre-university school teachers without an institutional understanding of LaTeX, which admittedly has an extremely high (technical, not financial) barrier to entry compared to Microsoft Word + MathType. This is what my secondary school teachers used, for instance. They're given bog-standard laptops with Windows to work with.
Also exam setters and writers in places like Cambridge University Press and Assessment. If you took a GCSE, O-level, or A-level exam administered by them, it had pretty high quality typesetting for maths, physics diagrams, chemistry skeletal diagrams, and reaction pathways... But almost none of it was done with LaTeX; it was instead probably all done with add-ons to Microsoft Word or Adobe InDesign.
> It's an awesome replacement for MathType. It uses OLE so that it embeds in Microsoft Word nicely.
But that's the rub - OLE doesn't embed particularly nicely. I haven't used it in over a decade (maybe two?). It's sort of very softly deprecated.
The new equation editor in Word, which isn't based on MathType and doesn't use OLE, works much more smoothly than the old one, even if it doesn't support everything. ("New"? I just checked and it was introduced in 2007!) I think a typical user would have to be really desperate for extra functionality to abandon that level of integration, at which point you'd probably switch away from Word altogether.
Damn dude didn't you pay like ... over $10k for that patent?
Also, Microsoft's Loop-Blinn patent for cubic curves will expire on March 25. These might change the landscape of text rendering...
At the time, they were going with approximating the curves with triangles. I don't know if they're still doing that, though.
Vello will probably do great under very heavy loads, but for your average UI or text document? I reckon something far simpler (like Slug!) will end up being more efficient simply by avoiding all those shader launches.
HarfBuzz is only one piece of the puzzle: it's not a text renderer, only a 'text shaper' (i.e. it translates a sequence of Unicode codepoints into a sequence of glyphs). The actual font handling and text rendering still need to be done by some other code (e.g. the readme of Mikko Mononen's Skribidi project gives a good overview of what's needed besides the actual rendering engine: https://github.com/memononen/Skribidi/)
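For the curious, the shaping step by itself is small; a hedged sketch of the C API (the font path is a placeholder and error handling is omitted):

    #include <hb.h>
    #include <cstdio>

    int main() {
        // Load a font face.
        hb_blob_t *blob = hb_blob_create_from_file("some_font.ttf");
        hb_face_t *face = hb_face_create(blob, 0);
        hb_font_t *font = hb_font_create(face);

        // Fill a buffer with UTF-8 text; let HarfBuzz guess script/language/direction.
        hb_buffer_t *buf = hb_buffer_create();
        hb_buffer_add_utf8(buf, "Hello", -1, 0, -1);
        hb_buffer_guess_segment_properties(buf);

        // Shape: after this call the buffer holds glyphs, not codepoints.
        hb_shape(font, buf, nullptr, 0);

        unsigned int count;
        hb_glyph_info_t *info = hb_buffer_get_glyph_infos(buf, &count);
        hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, &count);
        for (unsigned int i = 0; i < count; ++i)
            std::printf("glyph %u, advance %d\n", info[i].codepoint, pos[i].x_advance);

        hb_buffer_destroy(buf);
        hb_font_destroy(font);
        hb_face_destroy(face);
        hb_blob_destroy(blob);
    }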
This is cool but I did not know software patents were still a thing in the US.
a) The winding number of a point is the signed count of intersections between a ray cast from that point (the scanline) and the closed path.
b) The winding number around a point is the total angle subtended by the path at that point, divided by 2π.
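In symbols, for a point p and closed path γ (a hedged restatement of the two definitions, with the ray taken in the +x direction):

    w(p) \;=\; \sum_i \operatorname{sign}\big(y'(t_i)\big) \quad \text{(a: signed crossings } t_i \text{ of the ray from } p)
    w(p) \;=\; \frac{1}{2\pi} \oint_\gamma d\theta \quad \text{(b: total angle subtended at } p)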
Slug uses approach a), and that comes with a lot of edge cases (see the chart in the post) and numerical precision issues. The approach by Loop & Blinn uses b) and is thus simpler and more robust. The patent on that one has likewise expired: https://news.ycombinator.com/item?id=47416736#47420450
Also just to clarify regarding this statement:
> Slug uses approach a) and that comes with a lot of edge cases (see chart in the post) and numerical precision issues
Slug does not have numerical precision issues. It's the breakdown into different cases that _solves_ those issues, whereas your statement makes it sound like Slug has _both_ the case complexity and the precision issues.
The original paper did assume no overlap, yes. But that is not how anybody would implement it. For a long time one would use the stencil buffer with different operations depending on the front face / back face (this is where the path's rotation around the sample comes in, and what makes this an angle-based approach).
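For reference, that classic stencil setup looks roughly like this in OpenGL (drawPathFan and drawCoverQuad are hypothetical helpers):

    // Pass 1: accumulate winding numbers in the stencil buffer.
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);            // no color writes yet
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_INCR_WRAP);  // CCW fan triangle: +1
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_KEEP, GL_DECR_WRAP);  // CW fan triangle:  -1
    drawPathFan();    // triangle fan over the path; facing encodes the rotation direction

    // Pass 2: cover pass, filling wherever the winding number is nonzero.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);                         // reset stencil as we go
    drawCoverQuad();  // bounding quad of the path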
> which requires a complicated triangulation step. It can produce some nasty geometry in more complex cases.
Again, not how anybody would implement this. You can just stream the quadratic bezier curves unprocessed into the vertex shader, literally the simplest thing conceivable.
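A hedged sketch of what "unprocessed" means here: each quadratic segment (p0, p1, p2) becomes one triangle with fixed canonical coordinates, and the fragment stage evaluates the curve's implicit form (plain C++ standing in for the shaders):

    // Vertex data: p0 gets (u,v) = (0,0), the control point p1 gets (0.5, 0),
    // and p2 gets (1,1). The rasterizer interpolates (u,v) across the triangle.
    // Fragment side: the curve is u^2 = v in these coordinates, so the sign of
    // u^2 - v says which side of the curve the pixel is on.
    bool insideQuadratic(float u, float v) {
        return u * u - v <= 0.0f;
    }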
> With Slug, you can use only 1 quad per glyph if you want.
Nowadays one would probably implement Loop & Blinn in a tiled compute shader too (instead of using stencil buffers) to reduce memory bandwidth and overdraw. That way you also get one quad per glyph, but without any of the geometry special-casing that Slug does.
> It's the breakdown into different cases that _solves_ those issues, whereas your statement makes it sound like slug has _both_ the case complexity and the precision issues.
Correct, I might have worded that badly. There still remains a trade-off in a) that b) does not have.
I don't see a straightforward way to apply this technique in a pixel shader that includes multiple curves per triangle. I feel like any attempt to do that will approach the complexity of Slug, but maybe it's my own shortcoming that I don't see it. I would love to read more detailed information on that if you have it.
[1] https://medium.com/@evanwallace/easy-scalable-text-rendering... [2] https://web.archive.org/web/20180905215805/http://www.glprog...
Yes, they describe one variation of the angle based method to winding numbers by spanning a triangle fan from an arbitrarily chosen pivot point / vertex.
> if you want to do distance based anti-aliasing rather than supersampling
Particularly when it comes to rendering vector graphics, I think of analytic anti-aliasing methods as somewhat cursed and prefer multisampling [0], at least for magnification. For minification, mip-mapping remains the go-to solution. However, if you only render 2D text on a 2D plane, which is typically overlap-free, then these correctness issues don't matter.
> I don't see a straightforward way to apply this technique in a pixel shader that includes multiple curves per triangle
All modern vector renderers I know of avoid triangle rasterization entirely. Like I said, they typically do tiles (screen space partitioned into quads) in a compute shader instead of using the fixed-function pipeline with a fragment / pixel shader. The reason is that nowadays compute is cheap and memory bandwidth is the bottleneck. Thus, it makes sense to load a bunch of overlapping geometry from global memory into workgroup shared memory, render all of it down to pixels in workgroup shared memory, and then only write those pixels back to the framebuffer in global memory.
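A CPU-side sketch of that tiling scheme (the names and the 16-pixel tile size are mine; on a GPU, renderTile is one workgroup and tile[] lives in shared memory):

    #include <vector>

    struct Curve { /* control points, omitted for brevity */ };
    constexpr int TILE = 16;

    void renderTile(int tx, int ty,
                    const std::vector<Curve>& overlapping,  // curves binned to this tile
                    float* framebuffer, int fbWidth) {
        float tile[TILE][TILE] = {};     // stands in for workgroup shared memory
        for (const Curve& c : overlapping) {
            (void)c;  // accumulate coverage for every pixel of the tile here;
                      // all overdraw and blending stays in fast local memory
        }
        for (int y = 0; y < TILE; ++y)   // one write of each pixel back to global memory
            for (int x = 0; x < TILE; ++x)
                framebuffer[(ty * TILE + y) * fbWidth + tx * TILE + x] = tile[y][x];
    }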
> I feel like any attempt to do that will approach the complexity of Slug
A highly optimized implementation might very well, yes. Yet, handling the many cases of intersections of the path and the scanline won't be contributing to the complexity, which is what started this discussion.
> I would love to read more detailed information on that if you have it.
I implemented the outdated stencil buffer + triangle fan + implicit curves approach [1] if you want to take a look under the hood. The library is quite complex because it also handles the notoriously hard rational cubic bezier curves analytically, which Slug does not even attempt and just approximates. But the integral quadratic bezier curves are very simple and that is what is comparable to the scope Slug covers. It is just a few lines of code for the vertex shader [2], the fragment shader [3] and the vertex buffer setup [4].
Edit: You can even spin Loop & Blinn into a scanline method / hybrid: they give you the side of the curve your pixel is on [5], which is typically also the thing scanline methods are interested in. They compute the exact intersection location relative to the pixel, only to throw away most of the information and keep only the sign (the side the pixel is on). So, that might be the easiest fragment shader vector renderer possible. I put it together in a shadertoy [6] a while back.
[0]: https://news.ycombinator.com/item?id=46473247#46530503 [1]: https://github.com/Lichtso/contrast_renderer [2]: https://github.com/Lichtso/contrast_renderer/blob/main/src/s... [3]: https://github.com/Lichtso/contrast_renderer/blob/main/src/s... [4]: https://github.com/Lichtso/contrast_renderer/blob/main/src/f... [5]: https://news.ycombinator.com/item?id=45626037#45627274 [6]: https://www.shadertoy.com/view/fsXcDj
Well, now you know of a modern renderer that does use triangle rasterization. The reason is simple -- Slug was designed to render text and vector graphics inside a 3D scene. It needs to be able to render with different states for things like blending mode and depth function without having to switch shaders. It also needs to be able to take advantage of hardware optimizations like hierarchical Z culling. And sometimes, you need to clip glyphs against some surface that the text has been applied to. Using the conventional vertex/pixel pipeline makes implementation easier because it works like most other objects in the scene. Having this overall design is one of many reasons why a huge swath of the games industry has licensed Slug.
I'm going to dig into it further, but if I understood at a glance, the triangles are there conceptually, but not as triangles the graphics API sees. You compute your own barycentric coordinates in the pixel shader, which means you can loop over multiple triangles/curves within a single invocation of the shader. Sorry if that should've been obvious, but it's the piece I was missing earlier.
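Concretely, the piece I was missing, as a hedged sketch (the names are mine):

    struct Vec2 { float x, y; };
    static float cross2(Vec2 u, Vec2 v) { return u.x * v.y - u.y * v.x; }

    // Barycentric coordinates of pixel p in the conceptual triangle (a, b, c),
    // computed by hand so one shader invocation can loop over many curves.
    void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                     float& l0, float& l1, float& l2) {
        float area = cross2({b.x - a.x, b.y - a.y}, {c.x - a.x, c.y - a.y});
        l0 = cross2({b.x - p.x, b.y - p.y}, {c.x - p.x, c.y - p.y}) / area;
        l1 = cross2({c.x - p.x, c.y - p.y}, {a.x - p.x, a.y - p.y}) / area;
        l2 = 1.0f - l0 - l1;  // interpolate (u,v) as l0*uv0 + l1*uv1 + l2*uv2
    }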
I can now concede most of your original point. This seems like a simpler approach than Slug, if you're willing to supersample. Distance-based anti-aliasing remains an advantage of Slug in my view. I understand the limitations of analytic anti-aliasing approaches when compared to supersampling, but it can be a wonderful tradeoff in many situations. If you can't afford many supersamples and the artifacts are rare, it's an easy choice.
But for me personally, I'm writing a 4x supersampled 3D software renderer. I like how the supersampling is simple code that kills two birds with one stone: it anti-aliases triangle edges and the textures mapped within those triangles. I want to add support for vector-graphic textures, so your approach from the shadertoy could fit in very nicely with my current project.
But just one final thought on Slug in case anyone actually makes it this deep in the thread: the paper illustrates 27 cases, but many of those are just illustrating how the edge cases can be handled identically to other cases. The implementation only needs to handle 8 cases, and the code can be simple and branchless because you just use an 8-entry lookup table provided in the paper. You only have to think about all those cases if you're interested in why the lookup table works. It's not as intimidating as it looks. Well, I haven't implemented it, but that's my understanding.
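For illustration, my reading of that table (the constant and indexing follow the paper's shader listing, so treat this as a sketch rather than gospel): classify a quadratic by which of its three control points lie above the ray, then extract a 2-bit answer.

    #include <cstdint>

    // Returns a 2-bit code: bit 0 set means the curve's first root contributes
    // to the winding number, bit 1 set means the second one does.
    uint32_t rootCode(float y0, float y1, float y2) {
        uint32_t shift = (y0 > 0.0f ? 2u : 0u)
                       + (y1 > 0.0f ? 4u : 0u)
                       + (y2 > 0.0f ? 8u : 0u);
        return (0x2E74u >> shift) & 3u;  // all 8 cases, 2 bits each, packed in one constant
    }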
For those of you who aren't familiar with Eric's work, he's basically the Fabrice Bellard of computer graphics.
Also thank you to Eric Lengyel, I have had my eye on Slug for a while and wished it was open-source.