non-wikipedia garbage site reference: https://sg.news.yahoo.com/viral-everyone-12-now-theory-19314...
GP used an intentionally hostile and weird interpretation of "if you intend to subdue, enslave or kill, don't use it", which is aimed at dictatorships, organisations like Palantir, etc.
They’d only agree in the abstract. As soon as you name the oppressive regime you’re fighting against, suddenly a huge chunk of those people will come up with reasons to no longer support you.
However, there's a chance apartheid and authoritarian countries would not use it precisely because of this.
I don't think they will care.
B) In any case, I'm OK with it. Having the software explicitly licensed like this may prevent it from being legally considered a terrorism tool or munition if a bad actor were found to be connected with it; that kind of classification would have much more freedom-restricting consequences for the software.
>Not willing to violate the license of a software package.
It is, however, interesting in principle, since it implicitly allows use by criminals but not by law enforcement. By then making the tool very impractical to use, we can still punish the bad actors.
(I think there was a honeypot operation to this effect: the feds made up a "secure encrypted phone" and then acquired cartels as major customers.)
(On the off chance I just burned this very similar operation: dear feds, I'm so sorry!)
So presumably, by extension of your argument, the average person using Reticulum is either ("implicitly") a criminal or breaking the licence/ToS.
Where do you see it?
so explain to me how the license is going to be enforced?
This is an example of the HN "Jump to Conclusions Mat", where discussion leaps straight to extremely high-level politics and philosophy while skipping over the more practical, mundane problems.
A more practical issue is that the author has zero interest in being sued if my LoRa-connected emergency stop button for my CNC milling machine crashes and the machine then hurts someone (possibly myself).
Or my "emergency alert" transponder fails when I'm in the wilderness and someone (maybe me) dies instead of being rescued.
The wildest part of the story, which isn't being covered, is that this is an example of one guy doing all the work to produce something more capable than the entire Meshtastic project in about a year. A real-life example of the 10x or 100x engineer. How can Meshtastic accomplish so little if one guy accomplished so much? Historically it was not THAT bad: having more than one person work on a network protocol never killed progress for DECnet or Banyan VINES or SNA or any other old-time protocol, but maybe it's a mesh-network thing that having more than one cook in the kitchen eliminates all progress.
Unfortunately, being pretty much a one-person project, he doesn't have the legal skills to realize the license as written is awful and needs rewriting to achieve his goals, assuming his goals are even a good idea...
I've set this up and used it on my LAN at home. It's a LOT more than just LoRa or just Meshtastic, and it's pretty cool and works well. The app on my phone works well. Being abandonware, I'm shutting it down "when I get around to it". The ratio of Meshcore to Meshtastic users/traffic is around 20:1 in my area, so I'll be setting up Meshcore to fit in. Mesh LoRa is very local, just like cell phone service; I'm well aware there are parts of the world operating at the opposite ratio of popularity, where you "have to" use Meshtastic to fit in. That is not where I live, so I must use Meshcore.
Meshtastic isn't used here, so I can't mesh; cross that off. Reticulum works perfectly but is abandonware; cross that off. Meshcore has its... interesting pay-money-to-unlock-features scheme, and I can't decide if I like or dislike that; I'd like to cross it off, but it's the only remaining protocol. I could write my own and GPLv2 it, but if a superior system (Reticulum) can't get buy-in, my better-licensed system would also go unused. I think I am stuck having to use Meshcore, and I'm about 95% fine with that and 5% not.
I do find it amusing that I used ham radio AX.25 packet radio in the late 80s and early 90s, and at times this century. I know all about digipeating problems and hidden-transmitter problems and all the stuff the "kids" refuse to do a literature search for, and then seem surprised when it bites them. Really, this mesh stuff is just ham packet radio from 1981, except the total cost of a station is like $15 vs. at least $1000 back in the day. I had a node running Linux AX.25 back in the 90s, and I'm sure I had a couple thousand bucks in equipment by the time I was done, mostly repurposed later on. I still have several hardware TNCs in some closet or shelf somewhere...
Edit: looks like the Reticulum Manual might have some more technical details. https://github.com/markqvist/Reticulum/blob/master/docs/Reti...
Planetary-scale networking is mentioned as a design goal on the first page of the docs (https://reticulum.network/), which are hidden at the very top of the git repo.
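For a flavour of what the reference implementation looks like, here's roughly the kind of minimal Python you'd write with the RNS package to bring up a node and announce a destination (the app name and aspect below are placeholders; double-check the exact API against the manual):

    import RNS

    # Bring up Reticulum using the default config directory (~/.reticulum).
    reticulum = RNS.Reticulum()

    # Create a new identity and an inbound, encrypted single destination for it.
    identity = RNS.Identity()
    destination = RNS.Destination(
        identity,
        RNS.Destination.IN,       # accept incoming packets
        RNS.Destination.SINGLE,   # encrypted, single-identity destination
        "example_app",            # placeholder application name
        "demo",                   # placeholder aspect
    )

    # Announce the destination so other nodes can learn a path to it.
    destination.announce()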
This is a big turn off for me. I have seen it for a number of protocols beyond mesh ones. ESP-Hosted does this too. So does ELRS. Maybe I'm too used to reading data sheets etc, but if your protocol requires a specific implementation, I am put off by the friction: I must integrate your software, in the language you used, and will likely hit compatibility problems as a result.
I am certain the popularity of Meshtastic is down to how easy they have made it to onboard. Buy the module, flash it using the web flasher, install the app on your phone, done. There's a YouTube tutorial on every street corner for this, even though I (and seemingly many people) don't find Meshtastic to be all that reliable.
For reference, this is what Meshtastic has to say about their flood-based mesh protocol: https://meshtastic.org/docs/overview/mesh-algo/
So, for instance, at the URL you referenced, it says at the bottom:
> As meshes grow larger and traffic becomes more contentious, the firmware will increase these intervals. This is in addition to duty cycle, channel, and air-time utilization throttling.
> Starting with version 2.4.0, the firmware will scale back Telemetry, Position, and other ancillary port traffic for meshes larger than 40 nodes (nodes seen in the past 2 hours) using the following algorithm:
> ScaledInterval = Interval * (1.0 + ((NumberOfOnlineNodes - 40) * 0.075))
> For example an active mesh of 62 nodes would scale back telemetry.device_update_interval to 79.5 minutes instead of the 30 minute default.
It looks like they are already building back-off strategies as the net scales, and that starts to happen at very low node counts (just 40). So, what happens when node counts hit 500 or 1000? Again, not trying to throw stones; just trying to understand how far these protocols can go and how they degrade/fail as they scale. Ideally, they don’t fall over and even possibly get more robust (with more nodes, there are typically more topological connections between nodes, which provides more possible paths and resiliency).
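To make that concrete, here's a quick back-of-the-envelope script plugging node counts into the quoted formula (assuming it keeps scaling linearly beyond the documented example; the variable names are mine, not the firmware's):

    # Meshtastic's quoted interval-scaling formula, applied to larger meshes.
    def scaled_interval(interval_min, online_nodes):
        if online_nodes <= 40:
            return interval_min
        return interval_min * (1.0 + (online_nodes - 40) * 0.075)

    print(scaled_interval(30, 62))    # 79.5 minutes, matching the docs' example
    print(scaled_interval(30, 500))   # 1065.0 minutes (~17.8 hours)
    print(scaled_interval(30, 1000))  # 2190.0 minutes (36.5 hours)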
"Reticulum does not include source addresses on any packets" and with that you cannot throttle passing-through traffic based on source. Any hope of scaling is gone.
The deep brokenness of the current internet, specifically what has become the "cloud", is something I've been thinking about a lot over the past few years. (Now I'm working on trying to solve some of this, or at least build alternatives for people.)
and this:
> The way you build a system determines how it will be used. If you build a system optimized for mass surveillance, you will get a panopticon. If you build a system optimized for centralized control, you will get a dictatorship. If you build a system optimized for extraction, you will get a parasite.
This (along with other passages) seems to imply that it was all coordinated or planned in some way. But I've looked into how it came to be this way, and I grew up with it, and I think a lot of it stemmed from good intentions (the ethos that information should be free, etc.).
I made a short video recently on how we got to a centralized and broken internet, so here's a shameless plug if anyone is interested: https://youtu.be/4fYSTvOPHQs
Too bad nobody wrote a book called "The Mythical Man-Month" to dispel the majority of fantasies that engineers have about the way the world works.
I'm not sure why, specifically, being an engineer would put someone at an outsized disadvantage against the already hopeless notion of "understanding how the world works [in its totality?]".
One would think being smart and educated would put them ahead of the pack, even if they overestimate how smart and educated they are compared to others, or fall victim to the consequences of that - an accusation engineers commonly receive on social media, with similarly high suggestiveness and similarly little substantiation.
If creative people don't think at a systems level or a political, intersectional level when doing design, then they will completely ignore or miss the fact that engineering is a subset of a political or otherwise organizational goal.
The key problem with most engineers is that they don’t believe that they live inside a political system
Or do you mean that to you it all reads as yet another case of someone thinking their technology is what's going to right the ship that is society's current trajectory, then bailing when that didn't come to be? Because while I can certainly see that being the case, I'd say such a cycle is as much desperation as it is naivety. I think this is even reflected in it being a PHY-agnostic thing, meaning that as far as efforts like this go, it's a fairly enduring one.
Couldn’t have said it better myself
Desperation is just a manifestation of manic ignorance unfortunately
The only solution to ignorance is education, and I'll go back to my original point, which is that this precise thing was discussed over and over, and in detail, across the last half century of computing, in multiple places.
Most notably, one of the most popular and widely distributed books that discusses this explicitly is Fred Brooks' The Mythical Man-Month.
So my original critique is that engineers do not even utilize the core literature for which there is global consensus on these problems
I think this betrays a severe misunderstanding of what the internet is. It is the most resilient computer network by a long shot, far more so than any of these toy meshes. For starters, none of them even manage to make any intercontinental connections except when themselves using the internet as their substrate.
Now of course, if you put all your stuff in a single organization's "cloud", you don't get to benefit from all that resilience. That sort of fragile architecture is rightly criticized but this falls flat as a criticism of the internet itself.
If you hand individuals or groups the internet, they will naturally use it for spam, advertisement, scams, information harvesting, propaganda, etc - because those are what gain them the most.
The 'enshittification' of the internet was inevitable the moment it came into existence, and is the result of the decisions of its users just as much as of any one central authority.
If you let people communicate with each other on a large scale at high speeds, that's what you get.
The only way to avoid the problem is to make a system that has some combination of the following:
* No one uses
* Is slow
* Is cumbersome to use
* Has significant barriers to entry
* Is feature-poor
In such a system, there's little incentive for the same bad behaviors.
There is nothing inherent about fast, large-scale, or user-friendly communication that forces spam, scams, or propaganda. It's just that those outcomes emerge when things like engagement, attention, or "reach" are rewarded without being aligned to quality, truth, or mutual cooperation.
This is a well-studied problem in economics, but also behavioral science and psychology: change the incentive and feedback structure, and behavior reliably changes.
Based on the studies I've read in and around this topic, I think harmful dynamics are not inevitable properties of communication, but really contingent on how each system rewards actions taken by participants. The solution is not slowness or barriers, but better incentive alignment and feedback loops.
Reticulum is actually ahead of the curve by having a ready to use PDF manual you can download. For my part, I've been trying to put together an all-inclusive Raspberry Pi image or a live USB for Meshtastic, but it's not quite there yet (it's no more than a hobby for me, but I'm not making big off-grid promises either).
I like to liken it to my other hobby of retrocomputing. In the old days, your whole OS and all the applications ran from a few floppies, with a couple of books for documentation. If you needed to duplicate the environment, you just made copies of your disks (and of course you needed an original set to start with). But nobody thought of that as "offline"; that was just the normal way it worked, and yet it seems more offline than modern projects that claim to be offline.
In particular, it seems obvious to me that any preparedness plan that requires a user to acquire specialized hardware in advance (e.g. a battery- or solar-powered long-range radio of some kind) to be used with an off-grid network can reasonably expect that user to also come prepared with the software to drive that hardware.
As with many hobbies, this is a "just because I can, I will" type of thing.
That said, I picked up a couple of prebuilt LoRa solar nodes and a couple of mobile nodes (Seeed solar jobbies and Seeed mobile jobbies) and stuck the solar ones into my upper-story windows just over New Year's; one is set up as a Meshtastic repeater, the other as a Meshcore repeater.
I'm pretty amazed at the distances I hear from; I'm getting stuff this morning over Meshcore all the way from Vancouver, BC into my office in Seattle (pugetnet.org).
To get it all dialed in, having a Discord full of old ham guys that know RF pretty well certainly doesn't hurt.
It's certainly hobbyist grade at best. It seems like it could be very interesting for installs in small communities and larger estates, or as backhaul for remote IoT applications. Obviously you aren't going to push video over that bandwidth, but for weather stations and the like it seems cool.
Reticulum becomes more interesting when you are talking about some of the more robust radio technologies. Building a mesh LAN out of old wifi gear is interesting in concept.
https://unsigned.io/rnode_bootstrap_console/
Shit's insanely well thought out! I encourage everyone to dive in a bit. It's pure tech porn. (If you can endure the occasional Ayn Rand quote lol.)
It smells a lot like the hacker spirit of the 80s, mixed with a little spiritualism and anarchism. Very refreshing when so many other people are just disillusioned, worn out, angry, or frightened.
> To break free of the center, you must also let go of the concept of the "Address".
When I was still dealing primarily with on-prem networks in regulated environments (or cloud networks stubbornly architected in a fashion similar to on-prem ones), I worked with a lot of people who could not and would not ever understand this. It's not just a cloud thing. Some people just cling to using IP addresses for everything, all the time. They didn't understand why trying to access the Jira server via IP wouldn't work, because they didn't understand SNI, let alone a Host header. To them, dynamic record registration and default suffix settings are nothing more than a section of settings to be cruised over during clicked-in configuration. Zones can and should be split without regard for architecture or usage. Et cetera.
My theory is that because these people didn't understand Layer 7 stuff like HTTP or DNS, they just fall back to what they can look at in a console (Cisco ASA, AWS, or otherwise). IPv6 will simplify a lot of the NAT stuff, but it won't cure these people of using network addresses as a crutch. Not really sure what the systemic solution is - I was like this once, but was fortunate enough to be tasked with migrating a set of BIND servers to the cloud, and so learned DNS by the seat of my pants. Maybe certification exams should emphasize this aspect of networking more.
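(For anyone curious what the Host header issue looks like concretely, here's a toy Python sketch; the IP and hostname are made up, and a real HTTPS setup would also involve SNI, which plays the same role at the TLS layer.)

    import http.client

    # A name-based virtual host serves many sites from one IP; connecting by IP
    # alone doesn't tell the server which site you actually wanted.
    conn = http.client.HTTPConnection("203.0.113.10", 80)  # made-up documentation IP

    # The Host header is what routes the request to the right virtual host;
    # without it you typically land on the default site or an error page.
    conn.request("GET", "/", headers={"Host": "jira.internal.example"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)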
Oh I guess that falls under packet radio I see
500 Internal Server Error