• MitPitt 4 hours ago |
    A Raspberry Pi Zero is more powerful than an enterprise server from the 1990s. A minimalist static website is not impressive. You can fit way more in there.
    • alfanick 4 hours ago |
      Hey, it loads! Unlike ~10% of the pages on the front page of HN, hugged to death.
      • raddan 3 hours ago |
        Also I love the dithered B&W images. The entire aesthetic of the site is great.
    • vablings 3 hours ago |
      The website running on the vape was far more interesting than this. I do wonder if anyone has tried to use the microphone in these devices to listen to audio. Backdoored vape
    • raddan 3 hours ago |
      I hosted my personal email domain on a Zero for almost 10 years. It had about the same capability as the very expensive (and large) Win NT4 machine we used for our 80-person organization when I started my career in tech. I eventually replaced the Zero with a Raspberry Pi 4, primarily because the Zero’s IO ports are annoying (e.g., USB is not hot-pluggable!). An RPi 4 is extreme overkill for personal email, but it still idles under 1W, and when it fails I can replace the entire machine for next to nothing.

      The point of failure for all of these machines has been the SD card. They seem to last 4 years almost to the day. I suppose if I set up a RAMdisk they might last longer, but honestly, for the price of an SD card it’s not really worth my time.
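
      For what it's worth, a common middle ground between a full RAMdisk and nothing is to keep just the write-heavy paths in RAM. A sketch of /etc/fstab entries (sizes and paths are illustrative; tmpfs contents vanish on reboot, so only regenerable data like logs and temp files belongs there):

```
# /etc/fstab -- mount frequently written directories on tmpfs so routine
# logging never touches the SD card
tmpfs  /var/log  tmpfs  defaults,noatime,size=64m  0  0
tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0
```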

      • colechristensen 2 hours ago |
        >The point of failure for all of these machines has been the SD card. They seem to last 4 years almost to the day. I suppose if I set up a RAMdisk they might last longer, but honestly, for the price of an SD card it’s not really worth my time.

        There are "industrial" SD cards that should last considerably longer; you can look it up, as a few people have done their own testing. They can be slower, but that shouldn't be a blocker for an email server on a Pi.

      • ianburrell 2 hours ago |
        They make high endurance microSD cards that can handle a lot more writes before failing.

        OTOH, I corrupted a card by turning off the Pi in middle of writing.

      • tracker1 2 hours ago |
        I remember in the mid-to-late '90s the Exchange server ran so poorly that there was a *nix server in front of it for inbound email, just to throttle the ingress. When it was upgraded to a 4-socket server, there was concern when the *nix guys just let everything that had been held during the upgrade through, and it just chugged along. But the moment of panic was palpable. The Unix guys really didn't like that business internals and apps were running on Windows services, and thought it would be funny to try to knock over the new mail server.

        Today, you can run mailcow/mailu with all the options on a relatively modest vps. I'm on a cable provider that locks down residential customers and charges over 2x as much for business, so it's cheaper to use VPSes.

        On the RPi, I've mostly opted for SSD + USB adapters, as they've been significantly more reliable than SD. There are lots of cases that make this configuration a breeze. That said, I've mostly been running mini PCs since COVID, when the RPi got to be more expensive all-in and slower.

      • girishso 2 hours ago |
        Interesting, what tools did you use for email hosting?

        I’m scared of self hosting a mailbox.

        • amtamt an hour ago |
          https://www.xmox.nl/ is a pretty good single-binary mail solution for personal email hosting, if not too many features of a modern webmail are needed.
        • lostapathy an hour ago |
          Self hosting a mailbox is easy - getting email back out is the hard part.
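
          The hard part is mostly DNS and reputation: receivers check SPF, DKIM, and DMARC (plus matching forward/reverse DNS) before trusting your outbound mail. A sketch of the records involved, with example.com, the "mail" selector, and the IP purely as placeholders:

```
; SPF: which hosts may send mail for the domain
example.com.                  TXT  "v=spf1 ip4:203.0.113.10 -all"
; DKIM: public key used to verify message signatures
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key>"
; DMARC: policy for mail failing SPF/DKIM alignment
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

          Even with all three in place, mail from residential IP ranges is often rejected outright, which is why many self-hosters relay outbound mail through a VPS or smarthost.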
        • abdullahkhalids 36 minutes ago |
          Have been using Mail-in-a-box [1] for about 5 years. I haven't done any maintenance for at least 3 years, besides occasionally clicking restart in the admin web panel when it does serious security updates.

          I don't send a lot of emails from it, but the ones I do are delivered.

          [1] https://mailinabox.email/

    • Terr_ 2 hours ago |
      Indeed, you can even permanently run one using a balcony solar panel:

      https://solar.lowtechmagazine.com/about/the-solar-website/

    • static_motion an hour ago |
      My thoughts exactly. People regularly run Pi-hole on these things, which is not only "serving a website" (the dashboard) but also acting as a DNS server.
    • stkdump 33 minutes ago |
      I am serving a small web interface to control my shutters on an ESP32. I even did the experiment of not parsing the request and just always responding with the same response, so a web server for a single page can be trivial (you would have to embed images and all other resources into the HTML then). But of course I am parsing the request, because I need separate routes for the page and for the actions. Since this is on my home LAN it doesn't even need SSL. I guess as long as the traffic is low, an ESP32 might be able to do SSL. For me that isn't relevant, because it isn't on the internet, and when I want to connect to it from outside my home LAN, I just use WireGuard.
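
      The fixed-response idea really is tiny in any language. A minimal sketch in Python rather than ESP32 C (the port and page contents are made up): ignore whatever the client sends and return the same pre-built bytes every time.

```python
# Sketch: an HTTP "server" that never parses the request and always
# returns one pre-rendered page. Port and page body are placeholders.
import socket

PAGE = b"<html><body><h1>Shutters</h1></body></html>"
RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: " + str(len(PAGE)).encode() + b"\r\n"
    b"Connection: close\r\n"
    b"\r\n" + PAGE
)

def serve_forever(host="0.0.0.0", port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(1024)          # read (and discard) the request
                conn.sendall(RESPONSE)   # same bytes for every client
```

      An ESP32 version is the same accept/send loop over raw sockets, just with the page baked into flash.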
  • sphars 3 hours ago |
    The OP link is not to Pi zero website, here's the actual website that's being hosted on the Raspberry Pi:

    https://zero.btxx.org/

  • c0nsumer 3 hours ago |
    This feels a little weird because while they are running the website itself (HTTP) off the Pi, they are handing off all TLS to a cloud provider.

    So while the content is in RAM on the Pi, a lot of the heavier lifting (TLS termination) is done elsewhere, which saves a ton of CPU load on the Pi.

    • ironhaven 3 hours ago |
      Sometimes these demos enable caching on the reverse proxy. So for these tiny demo HTML pages, your request may not even reach the fun tiny computer it's supposed to demonstrate.
    • spijdar 3 hours ago |
      Yeah, I've seen this in more than a few places. There was a blog "running on a Wii" that, IIRC, was doing the same thing.

      On the one hand I get it, TLS is pretty heavy, and it makes sense to take advantage of a VPS or Cloudflare or however you want to do it.

      But once you are spinning up a VPS, the question is ... why the Pi? The VPS in the article has less RAM, but more storage. If you're already doing TLS termination on the VPS (the most RAM intensive part), you might as well just do the whole shebang there.

      I know this is all for fun, I'm just wondering -- is the Pi Zero really too slow to handle TLS, especially with an optimized TLS library? In this setup, the Pi is already being directly exposed to the Internet anyway, there's no VPN being used. That ARM11 isn't "fast", but surely a 1 GHz ARM11 can handle an optimized TLS library serving some subset of TLS1.2.

      • indigodaddy 2 hours ago |
        The TLS termination isn't actually on the VPS. The article details that Tierhive has an HAProxy edge service (handling the TLS), which then has the VPS as the backend, but that VPS is just doing TCP proxying with socat to the DDNS-exposed home server FQDN. Feels like a lot of unnecessary loops. Kinda fun, I guess, but, just, why?
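
        The chain as described here needs surprisingly little config. A rough sketch of both hops (hostnames, ports, and cert path are all placeholders, not the article's actual values):

```
# HAProxy edge: terminate TLS, forward plaintext to the VPS
frontend https-in
    bind *:443 ssl crt /etc/haproxy/site.pem
    default_backend vps
backend vps
    server relay vps.example.net:8080

# On the VPS: blind TCP relay to the home server's DDNS name
socat TCP-LISTEN:8080,fork,reuseaddr TCP:home.example.ddns.net:80
```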
        • Antirust3743 10 minutes ago |
          Yes it is: "we plan to use our external VPS for handling the TLS termination".
    • wang_li 3 hours ago |
      It is more than a little weird. A Pi Zero is more than capable of handling HTTP/1.1 and TLS 1.3 for a handful of connections per second. This machine is 10x what we were running web servers on in the '90s.

      Also, all web pages are served from RAM. It's automatic that modern OSes will cache this stuff on first access.

      • joe_mamba 2 hours ago |
        >This machine is 10x what we were running web servers on in the '90s.

        Kind of irrelevant, since operating systems and web pages in the '90s had significantly smaller footprints, as the web was mostly plain text back then. Windows XP, GUI and all, would run Max Payne in 128MB of RAM. You could do a lot back then; you can't do modern stuff like that today with 128MB of RAM.

        • huijzer 2 hours ago |
          You can host such sites perfectly well nowadays. I’ve often served hand-written HTML pages of only a few lines.
          • lq9AJ8yrfs 2 hours ago |
            LLMs, including open ones, are really good at this, it turns out. It stands to reason: there is tons of training material out there, which no doubt they have consumed and are ready to regurgitate.

            Yesterday I one-shotted several interactive pages that Qwen built out of straight HTML and Javascript. I handed it my API (source code, not even a swagger, via an MCP that Qwen wrote for me), asked for a frontend, and it delivered. One page at a time to keep context down, and I might've gotten lucky on the first draw, but after the first one I told it to make the next ones like the first.

            Can't say I've had that experience with backend languages & frameworks, incl writing that same API, but perhaps I'm off the beaten path with those, or perhaps there's greater breadth of things to do vs a narrower set of acceptance criteria? IDK.

            Here I was sweating that I'd have to research and learn a current-day frontend framework. It felt like a magic wand using consumer-grade AI. HTML and plain old Javascript was plenty.

            Tangent but apropos of other contemporary threads on HN, it puts a spin on supply chain threats. There's no NPM or anything, except perhaps whatever mysteries are baked into the model.

        • j45 2 hours ago |
          The contents of webpages are largely the same.

          HTML code, CSS, Javascript, Images.

          In this case, they are static elements, which can even be cached locally to share more easily.

          If someone wants a massive build system to render a static HTML page, that's on them, and their personal interpretation. Increasingly, and maybe more often than not, there is more than one way to get the same outcome.

          The fact that there are hundreds of downloads for a single web page is up to the constructor of that page. Still, these things can be reasonably cached. For example, host it on the Pi, then put a Cloudflare in front of it or something.

          The Pi Zero might not be for you, or it may be easy to try to undermine. Which criticisms would go away if it were on a regular Pi?

          • tracker1 2 hours ago |
            Even then... it's usually built before it's deployed on the server... the server is still delivering text, CSS, JS, and images, and images have always been pretty large, so your connection is tied up for a little bit longer. And while content was smaller in the '90s, connections themselves are much faster today: in the '90s you were lucky to be hosting on a T1 or faster, with clients on modems. Today you've likely got between 100Mb and 2Gb uplink on your home connection, let alone business connections that generally start at 1Gb. That's roughly 600x a T1's bandwidth for the server.
      • amatecha an hour ago |
        Yeah, I ran a phpBB forum (alongside my normal static site) on a 486 in 2003 or so. It worked. It was slow, but it worked just fine for my friends and me! I remember it took multiple minutes to generate the SSH server key after the initial install lol
        • mercutio2 17 minutes ago |
          A 486 in 2003? Pentiums were shipping by the mid-90s, did you just have super old hardware lying around?

          I retired my 486 in ‘95 or thereabouts…

      • walrus01 10 minutes ago |
        Anyone remember 32-bit/33 MHz PCI slot SSL accelerator cards? As I recall, OpenBSD had kernel driver support for several.
    • allthetime 3 hours ago |
      I wouldn’t consider "the way most people do TLS in 2026" weird. That said, this isn’t all that impressive or interesting: a computer… serving a website.
      • Antirust3743 2 hours ago |
        Is sending plaintext traffic over the open Internet "the way most people do TLS in 2026"? Am I missing something from the post?
        • tracker1 2 hours ago |
          Many (most?) are hosting web applications and/or content in separate applications, and sometimes separate servers, from where TLS (HTTPS) termination happens. HAProxy, Traefik, Caddy, and nginx as reverse proxies and TLS termination servers are pretty common, even more so if you're containerizing your applications. It dramatically simplifies the application stack.

          While I may make the argument that most are probably hosting and doing php on the same server, it's not the typical approach for any custom software at this point.
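
          As a concrete illustration of how little the termination layer involves, here's a Caddy sketch (domain and upstream address are placeholders; Caddy obtains the certificate automatically):

```
example.com {
    reverse_proxy 127.0.0.1:8080
}
```

          Everything behind the proxy then speaks plain HTTP on a private interface.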

  • jcalvinowens 3 hours ago |
    I have a self-hosting Pi Zero W running Gentoo. It started as a joke, but I kept it because it's actually occasionally useful for testing new kernel releases.

    I found a fun bug with it a couple years ago: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

    It is still able to build software faster than it is released. It takes roughly a month to recompile the entire system :D

    • colechristensen 2 hours ago |
      I self host some CI runners and do kernel work on a Pi writing some software defined radio things.

      For the radio stuff I can just take the Pi, frontend, and a battery pack outside to test.

      When I finally move to a place with proper fiber internet I'm going to be hosting several side projects on a handful of Pis.

  • jcgrillo 3 hours ago |
    After seeing what new R-Pi stuff is selling for I went rummaging in the parts drawer and found the following:

    - R-Pi Zero W

    - Sixfab UPS hat

    - Sixfab Cellular IoT App Shield

    - R-Pi model 1B

    With all this I should be able to make a multiply redundant, always-on bastion host. It's awesome that Alpine supports the armhf stuff; many OSes have dropped 32-bit support entirely.

    • giobox 3 hours ago |
      In the good old days a decade or so ago where the full fat Pi board was always 35 dollars and the zero was just 5, they were so cheap as to be practically disposable. I have an insane number of Pi 3/4 and Zero/ZeroW boards in projects and drawers around the house, but this has massively tapered off as prices have gone up. At one point I had an 8 pi 3 cluster to learn kubernetes/container orchestration techniques on - completely unnecessary, but building the little rack was 85% of the fun. That cluster ran my home stack for years (DNS, home automation, network admin UI etc).

      I've since got a lot more interested in the microcontroller community - so many Pi projects should really be microcontroller projects - the esp32 especially scratches the itch for cheap things to hack on, and you can get them for like 6-7 bucks each with wifi.

      • jcgrillo 3 hours ago |
        Yeah I've been using an ESP32-C6 for the latest wifi connected project I'm working on. The RP2040 and RP2350 look interesting too, I have a couple of them but haven't really done much with them.
    • vinc 2 hours ago |
      I assembled a solar server with those parts laying around last year:

      - Victron Monocrystalline Panel 90W 12V

      - Victron Gel Battery 12V 60Ah

      - Victron MPPT Charge Controller 75V 15A

      - Raspberry Pi Zero W

      - Witty Pi 5

      - Sixfab 4G/LTE Base HAT

    - Quectel EC25 Mini PCIe 4G/LTE Module

      Almost 100% uptime except for a few days after a bad winter storm, pretty neat!

  • _stiofan 3 hours ago |
    The Pi Zeros are great. I have a bunch of them. I used to use them as tiny servers for live webcams streaming to YouTube for customers, but YouTube now has a minimum subscriber count before you can go live, which sucks. These boards are pretty powerful.
    • bsoles 2 hours ago |
      I have never been able to stream video from a Raspberry Pi Zero's official camera. What tools/software were you using?
      • Multiplayer 41 minutes ago |
        I'm using an 8MP camera from freenove on a pi zero 2 - it's great.
  • fdjafhdasfjhds 3 hours ago |
    RAM? In this economy?!
  • Venn1 3 hours ago |
    They are powerful little devices. I used a Pi Zero 2 with an ethernet adapter to host an x86 TrackMania² server using BOX64 and it never had a problem. Only swapped it out recently because I needed the Zero 2 for another project.
  • wolvoleo 3 hours ago |
    Umm, some people run a website on a Commodore 64. That's impressive.

    A Raspberry Pi Zero can just run apache.

  • seemaze 3 hours ago |
    I've been using Raspberry Pi Zeros for cheap little Linux appliances since they were released. Boot them up with the latest Alpine Linux and run the whole thing from RAM. You can also remove a card from a running machine with no ill effect, and they easily survive power cuts. I've never had a card fail.
  • vednig 2 hours ago |
    We're running a complete production-grade cloud storage service on Raspberry Pi Zeros at https://getcloud.doshare.me; that's how powerful RPi hardware is. We've tested it for up to 10k concurrent requests, with storage of course, and it's still more than powerful enough.
  • orliesaurus 2 hours ago |
    So what benchmarks did you run or what's the advantage? Might as well just run the site on the VPS at this point since you're paying for it?
  • sam_lowry_ 2 hours ago |
    tell OP about tftp
  • starik36 2 hours ago |
    I have several of these running all sorts of quickie utilities. The key to making things faster (at least for my tasks) was to write everything I need in C#.

    For whatever reason, it seems far faster than Python for me.

  • doginasuit an hour ago |
    For optimal moral support, have one of the spare Pis holding a sign, maybe "Pi is our guy"
  • slow_typist an hour ago |
    Instead of having an open port in my router and sending data in plain text, I would use an ssh tunnel or a vpn. Or probably put the entire web site on the VPS.
  • basilikum 25 minutes ago |
    The Pi Zero has 512MB RAM and a one GIGAhertz CPU. It's a fucking supercomputer. Maybe not today, but not that long ago, and back then people were running much more intensive things on them than hosting a website. It should be perfectly capable of handling TLS. AES might be a bit heavy without hardware acceleration, but you can also offer only ChaCha20 as the single supported server cipher. It would be easy to DDoS, but you should be able to mostly address that with firewall rules rate-limiting connection attempts upstream.
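
    Both ideas can be sketched briefly: nginx syntax for the cipher pinning, nftables for the rate limit (values are illustrative, and the nftables rule assumes an existing inet/filter/input chain):

```
# nginx: TLS 1.2 with ChaCha20-Poly1305 only, avoiding AES on CPUs
# without hardware AES instructions
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;

# nftables: throttle new connections to the HTTPS port
nft add rule inet filter input tcp dport 443 ct state new limit rate 10/second accept
```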

    I don't mean to shit on this. Exploration is nice, and putting perfectly fitting hardware to use instead of throwing abundant unnecessary hardware at every simple problem, just to bring it to a crawl with loads of shitty bloated software, is good. But it's not particularly impressive.