Getting it working on Linux in ~1999 was really not easy, especially for a teenager with no Linux experience.
My networking card wasn't working either, so I had to run to a friend's house for dial-up internet access, searching for help on Altavista.
Very cool project. Way above my head, still!
I believe I tried redhat, but had issues with that as well. I never went back to it--moved to debian and never looked back.
Also had an issue with the modem; paging through the manual, I figured out the initialisation string
AT&FX1
Fun times, now everything is straightforward on Linux but I somehow miss that era when you actually had to do everything by yourself.
I remember really liking the 3dfx splash screen[1] for some reason. Maybe because it was the only thing that actually ran smoothly on that card. But still, I was a loyal 3dfx user - probably because of their marketing which someone else mentioned in the comments - and was sad when it went out of business a couple years later.
Now, here's somebody who's clearly strong on the quantitative side of engineering, but presumably bad at communicating the results in English. I consider both skill sets to be of equal importance, so what right do I have to call them out for using AI to "cheat" at English when I rely on it myself to cover my own lack of math-fu? Is it just that I can conceal my use of leading-edge tools for research and reasoning, while they can't hide their own verbal handicap?
That doesn't sound fair. I would like to adopt a more progressive outlook with regard to this sort of thing, and would encourage others to do the same. This particular article isn't mindless slop and it shouldn't be rejected as such.
Besides all that, before long it won't be possible to call AI writing out anyway. We can get over it now or later. Either way, we'll have to get over it.
Once we're there, we're there. Tree falling in a forest with no one around, etc. Once that happens, I'll stop reacting badly to it, but it hasn't yet (not without careful prompting anyway).
Their previous posts published before ChatGPT seem similar enough. Although, they have way more em dashes and this one has none, almost like they were removed on purpose... lol
I don't know what is real anymore.
If you spend time generating text with LLMs, there is a style that you learn to recognize pretty quickly.
Also, to be clear -- I'm not saying that we shouldn't use LLMs to help us produce the best text/prose we can -- but letting them just generate a lot of the text doesn't lead to the best outcome imo.
It was just way harder to program for. Triangles are much simpler to understand than Bézier curves, after all. And after Microsoft declared that DirectX would only support triangles, the NV-1 was immediately dead.
Not really. Forward texture mapping simplifies texture access by making framebuffer access non-linear, reverse texture mapping has the opposite tradeoff. But that is assuming rectangular textures without UV mapping, like the Sega Saturn did; the moment you use UV mapping texture access will be non-linear no matter what. Besides that, forward texture mapping has serious difficulties the moment texture and screen sampling ratios don't match, which is pretty much always.
There is a reason why only the Saturn and the NV-1 used forward texture mapping, and the technology was abandoned afterwards.
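The tradeoff described above can be sketched in a few lines (this is a toy illustration, not the Saturn's or NV-1's actual rasterizer): reverse mapping loops over screen pixels and reads the texture, forward mapping loops over texels and writes the framebuffer, and the forward approach leaves holes as soon as the sampling ratios don't match, e.g. under magnification.

```python
# Toy sketch (not the Voodoo/NV-1 implementation): contrast forward vs
# reverse texture mapping on a 2x-magnified axis-aligned rectangle.

TW = TH = 4          # texture size
SW = SH = 8          # screen-space size of the rectangle (2x magnification)
texture = [[ty * TW + tx for tx in range(TW)] for ty in range(TH)]

# Reverse (inverse) mapping: iterate over screen pixels, sample the texture.
# Framebuffer writes are linear; texture reads are scattered. No holes.
reverse_fb = [[None] * SW for _ in range(SH)]
for sy in range(SH):
    for sx in range(SW):
        tx, ty = sx * TW // SW, sy * TH // SH   # nearest-texel lookup
        reverse_fb[sy][sx] = texture[ty][tx]

# Forward mapping: iterate over texels, write into the framebuffer.
# Texture reads are linear; framebuffer writes are scattered -- and when
# sampling ratios don't match (here: magnification), pixels are missed.
forward_fb = [[None] * SW for _ in range(SH)]
for ty in range(TH):
    for tx in range(TW):
        sx, sy = tx * SW // TW, ty * SH // TH   # each texel hits one pixel
        forward_fb[sy][sx] = texture[ty][tx]

holes = sum(px is None for row in forward_fb for px in row)
print("forward mapping holes:", holes)   # 48 of 64 pixels never written
```

Real hardware papers over the holes by iterating sub-texel steps or drawing overlapping quads (the Saturn's approach), which burns fill rate instead.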
They're both beautiful in their own way: the darkness and glow in the hardware versions, a certain pixelated charm and roughness in the software version.
Nor did their marketing:
Nvidia was very smart to advertise 16-bit performance _and_ 32-bit quality at the same time :)
3dfx were stupid not to include a token 32-bit output option on the Avenger chip (Voodoo3). Every Voodoo chip since the first one has performed blending calculations at full precision and only dropped to dithered 16-bit output to save framebuffer RAM, but that RAM saving was meaningless by the time the 16 MB V3 released.
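The "blend at full precision, dither only on framebuffer write" idea can be sketched like this (the 2x2 Bayer kernel here is my own illustration, not 3dfx's actual dither matrix):

```python
# Sketch (illustrative, not 3dfx's real dither kernel): blend at full
# 8-bit-per-channel precision, then drop to 16-bit RGB565 with a 2x2
# ordered dither only when writing the framebuffer.

BAYER_2X2 = [[0, 2],
             [3, 1]]   # thresholds 0..3

def to_565_dithered(r, g, b, x, y):
    """Truncate 8:8:8 to 5:6:5, using the pixel position to pick a
    dither threshold so the quantization error varies across the screen."""
    t = BAYER_2X2[y & 1][x & 1]
    # scale the threshold to the number of bits discarded per channel
    r5 = min(31, (r + t * 2) >> 3)   # 3 bits dropped
    g6 = min(63, (g + t) >> 2)       # 2 bits dropped
    b5 = min(31, (b + t * 2) >> 3)
    return (r5 << 11) | (g6 << 5) | b5

# 50% blend of two greys, done at full precision *before* dithering:
src, dst = 200, 100
blended = (src + dst) // 2          # 150, exact at 8 bits
pixel = to_565_dithered(blended, blended, blended, x=0, y=0)
print(hex(pixel))                   # → 0x94b2
```

Because only the stored result is quantized, repeated blending doesn't accumulate 16-bit rounding error the way a naive 565 pipeline would.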
I'm noting down this conetrace for the future though, seems like a useful tool, and they seem to be doing a closed beta of sorts.
This list of registers and their categories is then imported in separate components which sit between incoming writes and the register bank. The advantage is that everything which describes the properties of the registers is in a single file. You don't have to look in three different places to find out how a register behaves.
Wouldn't it be more sensible to have one module for converting the AXI-Lite (I presume?) memory-map interface to the specific input format of your processor, and then have the processor pull data from this adaptor when it needs it? That way, all input handling is still done in one place.
Edit: maybe what it comes down to is: should the register bank be responsible for storing the state the compute unit is working on, or should the compute unit store that state itself? In my opinion, that responsibility lies with the compute unit. The compute unit shouldn't have to rely on the register bank not changing while it's working.
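The "compute unit owns its state" argument can be sketched behaviorally (all names here are made up for illustration, not the project's actual modules): the unit snapshots the register values it needs when a job is kicked off, so later host writes can't disturb work in flight.

```python
# Behavioral sketch (hypothetical names, not the project's modules):
# the compute unit latches the register values it needs at job start,
# so subsequent writes to the register bank can't corrupt work in flight.

class RegisterBank:
    def __init__(self):
        self.regs = {"tri_x0": 0, "tri_y0": 0, "color": 0}

    def write(self, name, value):        # host-facing write port
        self.regs[name] = value

class ComputeUnit:
    def start(self, bank):
        # Latch a private snapshot at kickoff -- the unit owns this state.
        self.state = dict(bank.regs)

    def step(self):
        return self.state["color"]       # reads only the latched copy

bank = RegisterBank()
unit = ComputeUnit()
bank.write("color", 0xFF)
unit.start(bank)
bank.write("color", 0x00)    # host overwrites mid-job...
print(hex(unit.step()))      # ...but the unit still sees 0xff
```

In RTL the same idea is a set of shadow registers loaded on the command strobe; the cost is the extra flops for the snapshot.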
IIRC, it was a gigantic (for the time) beast that barely fit in my chassis - BUT it had great driver support for ppc32/macos9 (which was already on its way out), and actually kept my machine going for longer than it had any right to.
And then, like a month after I bought it, NVidia bought 3dfx and immediately stopped supporting the drivers, leaving me with an extremely performant paperweight when I finally upgraded my machine. Thanks Jensen.
If you want to see what it's supposed to look like, copy the screenshot into GIMP, go into "Color, Levels" and in the "Input Levels" section, there should be a textbox+spinner with a "1.00". Set that to 0.45.
This is definitely fixable in the design though by looking at the DAC gamma register. I'll do so once I get to the scan-out implementation on the DE-10 Nano.
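The Levels trick above is just a gamma curve: GIMP's gamma slider maps each normalized channel value v to v ** (1 / gamma), so setting it to 0.45 applies roughly v ** 2.2, the correction a gamma table in the DAC would otherwise perform. A minimal sketch:

```python
# Sketch of the GIMP Levels gamma trick: each normalized channel value v
# becomes v ** (1 / gamma), so gamma = 0.45 applies roughly v ** 2.2 --
# the correction the DAC gamma table would otherwise perform.

GAMMA = 0.45

def apply_gamma(v8, gamma=GAMMA):
    """Apply Levels-style gamma to one 8-bit channel value."""
    v = v8 / 255.0
    return round((v ** (1.0 / gamma)) * 255.0)

# A mid-grey of 128 drops to ~55, which is why the uncorrected
# screenshot looks so different from corrected output.
print(apply_gamma(128))   # → 55
```
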
Or does this only run in simulation anyway?
Note, there are oversized hobby Voodoo cards that max out the original ASIC count and memory limits. There are also emulators like 86box that simulate the hardware just fine for old games.
https://www.youtube.com/watch?v=C4295RCp0GQ
>Or does this only run in simulation anyway?
If they are an LLM user, then it is 100% an April Fools' joke. =3
Note that I also implemented cache components not present in the original Voodoo in order to be more flexible in terms of the memory that can be used. So it could be quite a bit smaller, maybe 50% of the fabric if you got rid of that.
Btw, most 8 MiB vintage Voodoo 2 cards can be upgraded to 12 MiB by simply soldering on more RAM. I managed to snag a bunch of legit 125 MHz chips that work with every card produced.
I'm guessing this isn't fully cycle-accurate, but is it at least somewhat "IPC-accurate"? I'm guessing yes? But much of the original's performance was also derived from the Voodoo's (for the time) crazy-high memory bandwidth AFAIK.