The solution I came up with back in 2005 was a sharded Oracle 10g database, storing 1 read per row (controversial!), and - the biggie - using Oracle Advanced Queuing to run worker jobs that could scale independently (number of processes per queue). The design was basically a queue per business action (e.g. send data, aggregate consumption, etc.), with each worker process designed to do one thing: read a message off the queue (1 msg = 1 unit of work) and do its thing (if successful, write messages to other queues for downstream activity).
This all ran on a single piece of Sun hardware running Solaris, with the worker processes being either C++ applets or PL/SQL stored procs. It processed 48 million rows per day and stored about 18 billion rows in the active table. It ran for about 10 years before being decommissioned in favour of a COTS solution.
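For anyone who hasn't worked with this pattern: each business action gets its own queue and its own pool of single-purpose workers, so throughput per action is tuned simply by changing the worker count. A minimal sketch in Python, using the standard queue module as a stand-in for Oracle AQ (the queue names and worker bodies are illustrative, not the original system):

```python
# Minimal sketch of the queue-per-action worker pattern described above.
# queue.Queue stands in for Oracle AQ; names and bodies are illustrative.
import queue
import threading

send_q = queue.Queue()       # "send data" action
aggregate_q = queue.Queue()  # "aggregate consumption" action

def send_worker():
    while True:
        msg = send_q.get()        # 1 message = 1 unit of work
        # ... send the reading downstream ...
        aggregate_q.put(msg)      # on success, enqueue follow-on work
        send_q.task_done()

def aggregate_worker():
    while True:
        msg = aggregate_q.get()
        # ... fold the reading into a consumption aggregate ...
        aggregate_q.task_done()

# Scale each action independently by choosing the number of workers per queue.
for _ in range(4):
    threading.Thread(target=send_worker, daemon=True).start()
for _ in range(2):
    threading.Thread(target=aggregate_worker, daemon=True).start()

send_q.put({"meter": 42, "ts": 1700000000, "value": 1.5})  # producer side
```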
My question is: how would such a solution be architected today? What design, what technologies, etc.? I'm asking for my own learning (I've been working away from the pointy end of an IDE for quite some time).
Addendum: probably the only thing I wish we had done was use Oracle RAC. Instead we opted for an active-passive topology (annoyingly wasteful of 50% of our compute).
Feel free to ask clarifying questions.
Working data size (1 day): (8 B meter id + 8 B timestamp + 8 B value) * 48e6 readings ≈ 1.1 GB
Archival: 1.1 GB/day * 15 months ≈ 482 GB ... but it looks like mostly write-once, rarely-read data?
That is pretty small as far as modern servers go. My usual approach for IoT things is an event-sourcing-like architecture: you have logging servers (for disaster recovery) and processing server(s), which keep things in RAM. If a processing server crashes, it restarts and the logging servers re-send data from the last checkpoint.
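A rough sketch of that shape, assuming a line-per-reading JSON log and a simple in-RAM aggregate (file paths, record shape and checkpoint format are all made up for illustration):

```python
# Durable append-only log (logging side) + in-RAM state rebuilt by replaying
# the log from the last checkpoint (processing side). Illustrative only.
import json

LOG_PATH = "readings.log"            # appended by the logging server
CHECKPOINT_PATH = "checkpoint.json"  # written periodically by the processing server

def append_reading(meter_id: int, ts: int, value: float) -> None:
    """Logging side: durably append one reading per line (disaster-recovery copy)."""
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"meter": meter_id, "ts": ts, "value": value}) + "\n")
        f.flush()

def rebuild_state() -> dict:
    """Processing side: after a restart, load the checkpoint and replay the log
    from the recorded offset to rebuild the in-RAM totals."""
    try:
        with open(CHECKPOINT_PATH) as f:
            cp = json.load(f)
            state, offset = cp["state"], cp["offset"]
    except FileNotFoundError:
        state, offset = {}, 0
    with open(LOG_PATH) as f:
        f.seek(offset)
        for line in f:
            r = json.loads(line)
            key = str(r["meter"])
            state[key] = state.get(key, 0.0) + r["value"]
    return state
```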
But I am sure that Postgres could be used as well, or some sort of time-series database. Based on your description, you really don't need much as far as databases go - basically range queries and that's it.
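Something like the following would probably cover it - a sketch assuming the psycopg2 driver, with table and column names invented for illustration:

```python
# One readings table keyed on (meter_id, ts), plus range queries. That's it.
import psycopg2  # assumes a reachable Postgres and the psycopg2 package

DDL = """
CREATE TABLE IF NOT EXISTS readings (
    meter_id bigint      NOT NULL,
    ts       timestamptz NOT NULL,
    value    bigint      NOT NULL,
    PRIMARY KEY (meter_id, ts)
);
"""

RANGE_QUERY = """
SELECT ts, value
FROM readings
WHERE meter_id = %s
  AND ts >= %s AND ts < %s
ORDER BY ts;
"""

with psycopg2.connect("dbname=meters") as conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(RANGE_QUERY, (42, "2024-01-01", "2024-02-01"))
    rows = cur.fetchall()
```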
Since then, spinning hard drives were replaced by SSDs, then NVMe. More open-source NoSQL and columnar storage solutions appeared. Cloud services started offering hosted databases with effectively unlimited scale. Today all 500 GB would be 'hot', ready to be queried in real time.
Today I'd see three options:
* multiple cloud servers which receive the data and put it into a managed cloud database, like Google BigQuery. They'll handle all the scaling, including cross-region replication and backups. You might overpay, but likely still less than an Oracle software licence.
* a specialist SaaS aimed at IoT/time-series workloads, for example ClickHouse. It can handle 10,000 incoming rows per second. The data store later does background compaction (merges), partitioning data by date and other optimizations which make recent data faster to query than older data.
* place it into JSON or CSV files, one per hour or one per day, and query them with DuckDB (see the sketch after this list).
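For that last option, a minimal sketch of querying the raw files in place from Python (the file layout, paths and column names are assumptions on my part):

```python
# Query hourly/daily CSV files directly with DuckDB; no server to run.
import duckdb

con = duckdb.connect()  # in-memory database

# Daily consumption for one meter, read straight from the CSV files.
rows = con.execute("""
    SELECT date_trunc('day', ts) AS day, sum(value) AS consumption
    FROM read_csv_auto('readings/2024-01-*.csv')
    WHERE meter_id = 42
    GROUP BY 1
    ORDER BY 1
""").fetchall()
```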