By "unusual," I literally mean "not usual/not typical." Not "never happens."
Companies frequently put egress network policies in place that confine certain protocols like SSH and HTTP to certain ports. They do this in order to achieve compliance with regulations, to achieve security or operational certifications, or simply because they're paranoid. It's not necessarily the least restrictive means of accomplishing their goals, but that's what they do. And if they're big enough, they're going to use the size of the deal and their brand equity to persuade their vendors, who might ordinarily prefer to offer a service on a nonstandard port, to provide it on the customer's preferred port instead.
If you still don't understand, I'm sorry, but I cannot assist further.
Just because those companies exist does not mean that their shitty practices have any impact on real internet connections. If you, as a paying ISP customer, want to use a custom port or whatever, it is going to work. So as a developer you don't face any such restriction (which you couldn't know about beforehand anyway) when developing a solution to a problem.
"Middleboxes" is a hackernews meme that is thrown around because people here work at places who restrict stuff and they can't bother to change that situation but instead complain about it.
The fact that games exist and use all kinds of ports is proof that this is not a problem for normal networks.
Like, I understand the really restrictive ones that only allow web browsing. But why allow outgoing ssh to port 22 but not other ports? Especially when port 22 is arguably the least secure option. At that point let people connect to any port except for a small blacklist.
Maybe https is routed through a monitoring proxy, but in the situation of allowing ssh, the ssh wouldn't be going through one. So I still don't see the point of restricting outgoing ports on a machine that's allowed to ssh out.
This could also have been solved by requiring users to customize their SSH config (coder does this once per machine, and it applies to all workspaces), but I guess the exe.dev guys are going for a "zero-config, works anywhere" experience.
The port issue is also boringly practical. A lot of corp envs treat 22 as blessed and anything else as a ticket, so baking the routing into the name is ugly, but I can see why they picked it, even if the protocol should have had a target name from day one.
But yeah, everything is a trade-off.
;; Domain: mydomain.com.
;; SSH running on port 2999 at host 1.2.3.4
;; A Record
vm1928.mydomain.com. 1 IN A 1.2.3.4
;; SRV Record
_ssh._tcp.vm1928.mydomain.com. 1 IN SRV 0 0 2999 vm1928.mydomain.com.
If supported, it would result in just being able to do "ssh vm1928.mydomain.com" without having to add "-p 2999".

In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys, in large part because they're stored "bare" rather than encapsulated in a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted:
https://cryptographycaffe.sandboxaq.com/posts/survey-public-....
> where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.
I feel like it's obvious that ssh public keys publicly identify me, and if I don't want that, I can make different keys for different sites.
You can try it yourself: [0] returns all the keys you send and even shows you your GitHub username if one of the keys is used there.
[0] ssh whoami.filippo.io
"Did you know that ssh sends all your public keys to any server it tries to authenticate to?"
It should be may send, because in the majority of cases it does not in fact send all your public keys.
This is just an awfully designed feature, is all.
Are you AI?
You can wildcard-match hosts in ssh config. You generally have fewer than a dozen keys and it's not that difficult to manage.
I have the setting to only send that specific host’s identity configured or else I DoS myself with this many keys trying to sign into a computer sitting next to me on my desk through ssh.
Like I can’t imagine complaining about adding 5 lines to a config file whenever you set up a new service to ssh onto. And you can effectively copy and paste 90% of those 5 short lines, just needing to edit the hostname and key file locations.
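For concreteness, the five-ish lines in question are roughly this stanza (host name, user, and key path are placeholders); `IdentitiesOnly yes` is also what keeps ssh from offering every key it knows about:

```
Host myservice
    HostName myservice.example.com
    User git
    IdentityFile ~/.ssh/id_myservice
    IdentitiesOnly yes
```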
> I feel like it's obvious that ssh public keys publically identifies me, and if I don't want that, I can make different keys for different sites.
You're probably not the only one for whom it's obvious, but it appears to be not at all obvious to large numbers of users.
A rather niche use-case to promote certificate auth... I'd add the killer-app feature is not having to manage authorized_keys.
Not sure why you need to belittle one example just to add another
See ssh_config and ssh-keygen man-pages...
But what I found, empirically, is that a substantial number of observable SSH public keys are (re)used in way that allows a likely-unintended and unwanted determination of the owner's identities.
This consequence was likely not foreseen when SSH pubkey authentication was first developed 20-30 years ago. Certainly, the use and observability of a massive number of SSH keys on just a single server (ssh git@github.com) wasn't foreseen.
For example: https://smallstep.com/blog/ssh-vs-x509-certificates/#certifi... you can see here that X11 forwarding is permitted for this certificate, among other things.
Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.
EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper
will try to remember to look later.
Another thing that just crossed my mind is that the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM. The client can't ever send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header, only because nginx is on the host.
What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:
ssh user@vm1.box1.tld becomes: ssh -J jumpusr@box1.tld user@vm1
And just make jumpusr have no host permissions and shell set to only allow ssh.
At that point you run into the problem that SSH doesn't have a host header and write this blog post.
That's one implementation. Another implementation is the proxy looks at the SNI information in the ClientHello and can choose the correct backend using that information _without_ decrypting anything.
Encrypted SNI and ECH requires some coordination, but still doesn't require decryption/trust by the proxy/jumpbox which might be really important if you have a large number of otherwise independent services behind the single address.
For the proxy I did not rely on a “proper” ssh daemon (like openssh), but wrote my own using a Go library called gliderlabs/ssh. That in particular allowed me to implement only a TCP forwarding callback [1], and not provide any shell access at the protocol level. It also made deployment nicer: no need for a full VM, just a container was sufficient.
It is also worth noting that the -J can be moved into .ssh/config using the ProxyJump option. It does mean end users need a config file, but it does allow typing just a plain ssh command.
[1] https://pkg.go.dev/github.com/gliderlabs/ssh#ForwardedTCPHan...
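For reference, that ProxyJump stanza could look like this in ~/.ssh/config (host names are placeholders; the negated pattern keeps the jump host itself from looping), after which a plain `ssh user@vm1.box1.tld` routes through the jump host:

```
Host *.box1.tld !box1.tld
    ProxyJump jumpusr@box1.tld
```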
Give users the option to use IPv6 only, and if a user needs legacy IP, add it at additional cost and move on.
Trying to keep v4 at the same cost level as v6 is not a thing we can solve. If it were, we wouldn't need v6.
>legacy IP
lol
> they could pay a small extra for a dedicated IPv4 address.
Did you mean that the dedicated IPv4 address to connect via SSH? Then my objection doesn't apply.
With this IPv4 trick, if your employer or university only provides IPv4 you can use the product anyway.
Before someone mentions tunnels: Last time I tried to set up a tunnel Happy Eyeballs didn't work for me at all; almost everything went through the tunnel anyway and I had to deal with non-residential IP space issues and way too much traffic.
One simple way to check if your ISP has some kind of IPv6 network is to see if the CDN domains given by YouTube and Facebook have AAAA records.
We shouldn't have to ask for ISPs to add IPv6 support but here we are.
Discussions about IPv6 quickly end with "we have enough v4 space and there are no services that require v6 anyway". As long as the extra cruft for v4 support remains free or even supported, large ISPs won't care. We're at the point where people need to deal with things like peer to peer connectivity with two sides behind CGNAT which require dedicated effort to even work.
I know it sucks if none of the ISPs in your area support IPv6 and you're left with suboptimal solutions like tunnels from HE, but I think it's only reasonable all this extra cost or effort becomes visible at some point. Half the world is on v6, legacy v4-only connections are becoming the minority now.
It is also available for one of my phone contracts, but I have not tried enabling it yet.
In 2025, I tried to access my services using IPv6 with 4G phones and different subscriptions (different ISPs), fact is, many (most?) of them did not support IPv6 at all :(
I had to revert to IPv4. And really I have nothing against IPv6, but yeah, as a simple user, self hosting a bunch of services for friends and family: it was simply just not possible to use only IPv6 :(
(for context, the 4G providers are French, in metropolitan France)
We are not running out of IPv4 space because NAT works. The price of IPv4 addresses has been dropping for the last year.
I know this because I just bought another /22 for exe.dev for the exact thing described in this blog post: to get our business customers another 1012 VMs.
I was surprised how low IPv4 prices have gotten. Lowest since at least 2019.
I certainly wish we simply had more addresses. But v4 works.
And it's not clear it will ever be better than it is now with CGNAT on the rise.
Would love to hear I'm wrong about this.
I have a IPv6-only VPN with some personal services. Theoretically, the data can be transported via IPv4, but Android doesn't even query AAAA records if it doesn't have a route for [::]/0. So when I'm not home, I can't reach my VPN servers, because there is supposedly no address.
(I fix it by routing all IPv6 traffic through my VPN. Just routing connectivitycheck may suffice though).
https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
Looks like Canada has roughly 40% adoption, and USA roughly 50% adoption.
IPv6 does not work on the only ISP in my neighborhood that provides gigabit links. I will not build a product I cannot use.
Even when IPv6 is rolled out, it is only tested for consumer links by Happy Eyeballs. Links between DCs are entirely IPv4 even when dual stacked. We just discovered 20 of our machines in an LAX DC have broken IPv6 (because we tried to use Tailscale to move data to them, which defaults to happy eyeballs). Apparently the upstream switch configuration has been broken for months for hundreds of machines and we are the first to notice.
I am a big believer in: first make it work. On the internet today, you first make it work with IPv4. Then you have the luxury of playing with IPv6.
(PS: I use exe.dev quite a lot whenever I have a project, basic scripting doesn't work, and I want a full environment. Really, thanks for having this product. I appreciate it as someone who has been using it since day one, and I have recommended it and spoken well of your service to people :>)
The reason we put so much effort into exposing these publicly is for sharing with a heterogeneous team without imposing a client agent requirement. The web interface should be easy to make public, easy to share with friends with a Google Docs-style link, and ssh should be easy to share with teammates.
That said, nothing wrong with installing tunneling software on the VM, I do it!
Cool.
Somebody else will, and will likely have a better price (due to the abundance of ipv6 addresses) and you’ll go out of business.
> because we tried to use Tailscale to move data to them, which defaults to happy eyeballs
Not gonna lie, to me that reads like “because we don’t know how to use ipv6”
It's similar to "open source is the most secure because it has the most eyeballs on it", but in reality security bugs will exist for years with no one noticing, because people vastly overestimate how many developers will actually spend their time analyzing any given open source software.
Sure, bugs are more likely to be caught in open source and it's more likely someone will take your market share with a more efficient and competitively priced product, but you're overblowing the likelihood of both by a large margin.
Well, you're leaving behind a serious pain point for your users AND you're leaving a clearly more compute- and money-efficient way to achieve the objective on the table.
It’s literally giving your eventual competitors (because there will be competitors, eventually) a competitive advantage.
Then sure, the market is very wide but… just look at stackoverflow vs chatgpt. As soon as a better alternative came on the market, stackoverflow died to irrelevance within months.
I have seen that port technique used in NAT servers.
So far it feels like only LDAP really makes use of it, at least with the tech I interact with
Overall, DNS features are not well implemented in most software stacks.
A basic example is the fact that DNS resolution actually returns a list of IPs, and the client should try them sequentially or in parallel, so that one can be down without impact or annoying TTL propagation issues. Yet many languages have a std lib that gives you back a single IP, or an HTTP client that assumes only one, the first.
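A sketch of what "try them all" looks like in practice (this is the sequential version; Happy Eyeballs generalizes it by interleaving v6/v4 and racing attempts in parallel):

```python
import socket

def connect_any(host: str, port: int, timeout: float = 3.0) -> socket.socket:
    """Resolve host and try *every* returned address in order,
    rather than assuming the first A/AAAA record is reachable."""
    last_err: OSError | None = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            s.settimeout(None)
            return s                    # first address that answers wins
        except OSError as err:
            last_err = err
            s.close()
    raise last_err if last_err else OSError(f"no addresses for {host!r}")
```

This is essentially what the stdlib's own `socket.create_connection` does under the hood; many hand-rolled clients skip the loop and break as soon as the first record points at a dead host.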
I also know of https://github.com/Crosse/sshsrv and other tricks
I agree more SRV records would have helped with a tremendous number of unnecessary proxies and wasted heat energy from unnecessary computing, but in this day and age, I think ECH/ESNI-type functions should be considered for _every_ new protocol.
You can front a TLS server on port 443 and then redirect without decrypting the connection based on the SNI name to your final destination host.
Provided your users will configure a little something, or you provide a wrapping command, you can set up the tunneling for them.
Certificate signing was done by a separate SSH service, which you connected to with SSH agent forwarding enabled; you pass a 2FA challenge and get a signed cert injected into your agent.
I'd love to learn more about how you solved it and what I may be mistaken about.
>with SSH server
My comment was about how you do not need an ssh server. The idea of a server exposing a command line that allows potentially anything to be done is not necessary in order to manage and monitor a server.
I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as they would in any other environment. There's an option in sshd that lets you run a script during a connection request, so you can almost juggle connections according to the username (if I remember right; it's been several years since I tried it), but it's terribly fragile, tends not to pass TTYs properly, and basically everything hates it.
But, set up knockd, and then generate a random knock sequence for each individual user and automatically update your knockd config with that, and each knock sequence then (temporarily) adds a nat rule that connects the user to their destination container.
When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.
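The client-side file described above might look something like this sketch (the host names, the knock sequence, the DNAT'd port, and the `knock` client shipped with knockd are all placeholders/assumptions):

```
Host mycontainer
    HostName containers.example.com
    Port 22022
    ProxyCommand sh -c 'knock %h 7001 7002 7003 && sleep 1 && exec nc %h %p'
```

The knock opens the temporary nat rule, and nc then carries the actual SSH stream through the forwarded port.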
Been using this for a few years and no problems so far.
It's a nice solution but I've been looking for something more transparent (getting them to configure an SSH key is already difficult for them). A reverse proxy that selects backend based solely on the SSH key fingerprint would be ideal
That and ProxyJump both also require the container-host to negotiate ssh connections, which is... fine, I guess? But the port knocking approach means that the only thing the container-host is doing is port forwarding, which gives it like half an extra point in my calculus.
One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.
Take a look at this repo: https://github.com/mrhaoxx/OpenNG
It allows you to connect multiple hosts using the same IP, for example:
ssh alice+hostA@example.com -> hostA
ssh alice+hostB@example.com -> hostB
Still, this is the best zero-config solution in my opinion, much simpler than the solution they decided to go with.
`ssh name`
Even less things to remember + you have documented your hostnames in the process.
[0]: https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
I also know how to use SRV records so this is a non-issue for me and everyone I work with.
Not exactly what I built it for, but it'll do the job here too, and it's able to connect to private addresses on the server side.
https://www.haproxy.com/blog/route-ssh-connections-with-hapr...
from the blog
> Did you know that you can proxy SSH connections through HAProxy and route based on hostname?
Good write-up of a tricky problem, and glad to get real-world validation of the solution I was considering.
1. Client side: ProxyJump, by far the easiest
2. Server side: use ForceCommand, either from within sshd_config or .ssh/authorized_keys, based on username or group, and forward the connection that way. I wrote a blogpost about this back in 2012 and I assume this still mostly works, but it probably has some escaping issues that need to be addressed: https://blog.melnib.one/2012/06/12/ssh-gateway-shenanigans/
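A minimal sketch of the authorized_keys flavor of option 2 (the backend name, the key material, and the use of `nc` are assumptions): the forced command ignores whatever the client asked to run and splices the TCP connection through to the inner sshd.

```
# on the gateway, ~/.ssh/authorized_keys: one line per user/backend
command="nc -q0 vm1.internal 22",no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3...rest-of-key alice
```

The client then treats the gateway as a ProxyCommand, e.g. `ssh -o ProxyCommand="ssh alice@gateway" user@vm1.internal`, and the escaping concerns from the blog post mostly go away because no shell ever sees client-supplied input.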
> unexpected-behaviour.exe.dev
That is not a URL, that's a fully qualified domain name (FQDN), often referred to as just 'hostname'.
I even do this despite having a small range of routable IPv4s pointing at home, so I don't really need to most of the time. And as an obscurity measure the jump/bastion host can only be contacted by certain external hosts too, though this does still leave my laptop as a potential single point of security failure (and of course adds latency), and anyone or any bot trying to get in needs to jump through a few hoops to do so.
Setting it up like this where you just assume:
> The public key tells us the user, and the {user, IP} tuple uniquely identifies the VM they are connecting to.
Seems like begging for future architectural problems.
Whereas matching on user+ip is a one-time proxy install.
SSH cannot multiplex to different servers on the same host:port. But you can use multiple ports and forwarding.
You could give each machine a port number instead of a host name:
ssh-proxy:10001
ssh-proxy:10002
When you ssh to "ssh-proxy:10002" ("ssh -p 10002 ssh-proxy" with your OpenSSH client that doesn't take host:port, sigh), it forwards that to wherever the 10002 machine currently is.

It would be interesting to know why they rejected the port number solution, but the only hit for "port" in the article is in the middle of the word "important" in the sentence:
But uniform, predictable domain name behavior is important to us, so we took the time to build this for exe.dev.
You can have uniform, predictable domain + port behavior. Then you don't need a smart proxy which routes connections based on identities like public keys. Just manipulation of standard port forwarding (e.g. iptables).
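The port-per-machine scheme would then reduce to ordinary DNAT rules on the proxy box, a sketch under the assumption of placeholder internal addresses (the rules must be rewritten whenever a VM moves):

```shell
# ssh-proxy:10001 -> VM currently at 10.0.0.11
iptables -t nat -A PREROUTING -p tcp --dport 10001 \
    -j DNAT --to-destination 10.0.0.11:22
# ssh-proxy:10002 -> VM currently at 10.0.0.12
iptables -t nat -A PREROUTING -p tcp --dport 10002 \
    -j DNAT --to-destination 10.0.0.12:22
# make return traffic flow back through the proxy
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```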