When I first started encoding MP3s I used a 128kbps rate, which is noticeably inferior to the original CD. I noticed this in the early 2000s when I wound up listening to a CD of some music I usually listened to as a 128kbps MP3 and was blown away by how much more I heard.
I'd say that 192kbps is much better and the 320kbps that the author advocates is basically transparent.
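For context on what those bitrates mean in raw data terms, here's a quick back-of-envelope sketch (the CD figures are the standard Red Book parameters; the four-minute track length is just an arbitrary example):

```python
# Back-of-envelope data rates: uncompressed CD audio vs. common MP3 bitrates.

def stream_size_mb(kbps: float, seconds: float) -> float:
    """Size in megabytes of an audio stream at a constant bitrate."""
    return kbps * 1000 * seconds / 8 / 1e6

CD_KBPS = 44_100 * 16 * 2 / 1000  # 1411.2 kbps: 44.1kHz, 16-bit, stereo
TRACK_SECONDS = 4 * 60            # a four-minute song

for label, kbps in [("CD (PCM)", CD_KBPS), ("MP3 128", 128),
                    ("MP3 192", 192), ("MP3 320", 320)]:
    print(f"{label:>8}: {stream_size_mb(kbps, TRACK_SECONDS):6.1f} MB")
```

So even the "basically transparent" 320kbps setting is throwing away roughly three quarters of the CD's data rate; the codec's job is making sure it's the three quarters you can't hear.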
But sadly today most popular music is ruined beyond repair with dynamic compression, not data compression. The craven stupidity of the loudness war may be unequaled in the history of art, and yet even the artists often don't seem to understand what the problem is. You see legendary artists complaining about modern sound quality (Dylan, Neil Young, and so forth) but then cheerleading for absurd sampling rates and bit depth. NO. That isn't the problem. I have 45-RPM records that sound better than their "lossless," "remastered" incarnations on streaming services.
The biggest problem in popular music (and I would say this probably pervades everything but classical at this point) is dynamic compression.
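As a toy illustration of what heavy limiting does, here's a synthetic signal run through a crude brick-wall clipper (nothing like a real mastering chain, just the basic mechanism): the crest factor, a rough proxy for dynamic range, collapses.

```python
# The loudness war in miniature: hard-limiting with make-up gain raises the
# average (RMS) level while peaks stay capped, shrinking the crest factor
# (peak-to-RMS ratio). Purely synthetic signal for illustration.
import math

def crest_factor_db(samples):
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A decaying "drum hit": sharp peak, long quiet tail.
dry = [math.exp(-t / 200) * math.sin(t / 3) for t in range(4000)]

# Crude brick-wall limiting: boost by 8x, then clip everything at +/-1.0.
limited = [max(-1.0, min(1.0, s * 8)) for s in dry]

print(f"dry:     {crest_factor_db(dry):5.1f} dB crest factor")
print(f"limited: {crest_factor_db(limited):5.1f} dB crest factor")
```

The limited version measures "louder" at the same peak level, which is the whole incentive, and the transient that made the hit sound like a hit is what gets sacrificed.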
Not going to argue with you regarding dynamic compression, but after mastering engineers backed away from the worst excesses of the volume wars in the mid '00s, things have been sounding better to my ears. Dynamic compression can sound good (even in the extreme) if done for artistic effect. Here's Beck's "Ramona", where the drums & cymbals have the tar squashed out of them with serious limiting, which to my ears nicely tames the sonics of Joey Waronker's spirited performance while fitting well dynamically into the rest of the song. https://www.youtube.com/watch?v=e3yZ9OVjzbE
That said, maybe the engineers responsible for some of the worst dynamic squashing could be pressed into TV/film audio service, where in 2026 there are still extreme volume imbalances between on-screen dialogue and everything else (hint: the dialogue isn't loud enough, and everything else, especially crashes and explosions, is wayyy too loud).
Today “loudness” is an aesthetic choice and good mixers and producers know how to craft a record that is both loud and of good sonic quality.
There is a place for both dynamic records (in the sense of classical or old jazz records) and contemporary loudness aesthetic.
Can inexperienced producers/mixers do a hack job trying to emulate the loud mixes of pros? Yes. The difference comes down to taste and ability to execute with minimal sonic tradeoffs.
Source: I have a long history producing, mixing, and mastering records and work among Grammy winners regularly. Very much in the dirt on contemporary records.
Also, you can train yourself for what to listen for, to a point.
Of course this does matter to some people and I say "have fun".
I had Tidal many years back, and between Lossless and Regular I only ever noticed a difference when it came to breathy sounds and the like. I did see that Tidal would burn through something like 50GB of data monthly, though.
Also - you may want to test some more modern recordings; the microphone/mastering quality of things nowadays is far better than it was two decades ago (despite what some audiophiles may claim).
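That 50GB figure checks out with simple arithmetic. Assuming a ~1000kbps ballpark for CD-quality FLAC (an assumed round number, not a measured Tidal rate):

```python
# Sanity check: how many listening hours does 50GB/month buy at lossless
# vs. lossy rates? The ~1000kbps FLAC figure is an assumed ballpark; FLAC
# typically compresses 16-bit/44.1kHz stereo to well under its raw 1411kbps.

def listening_hours(gigabytes: float, kbps: float) -> float:
    """Hours of audio that fit in the given data allowance."""
    return gigabytes * 1e9 * 8 / (kbps * 1000) / 3600

print(f"{listening_hours(50, 1000):.0f} hours/month at ~1000kbps FLAC")
print(f"{listening_hours(50, 320):.0f} hours/month at 320kbps MP3")
```

Roughly 110 hours a month at lossless is only a few hours a day, so a heavy listener on mobile data would burn through that easily.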
In practice, on average playback equipment (by which I mean decent hifi) in an average listening environment most people can’t tell the difference.
But… I’ve also done blind testing with a top mastering engineer on studio speakers, and he was able to identify 48kHz vs 192kHz reliably.
Mastering quality was ruined by the battle for perceived loudness, so masters with a decent degree of dynamic range are definitely helpful.
I've heard things get close using regular CD audio with some umpteen-channel DSP effects, but nothing like that from two speakers and a straight playback with no effects processing.
I've also had a binaural headset demo get really really close. I imagine it could be better, but this was for some generic model, not anything that is tuned to your own personal ear shape etc.
Because any of us from the late '90s/early 2000s who used the early versions of LAME will tell you in a second how easy it was to pick the MP3 over the raw audio, even at 320kb/s.
Few audio things bug me more than the kind of tinkly pre-echo effects that were pervasive for a while.
On the other hand, the only sample in which I didn't hear ANY difference is Ennio Morricone's, to the point where I couldn't really tell it apart from its 56kbit/s version.
Can hearing be selectively bad for some frequencies within the standard 20-20,000Hz range, and normal for the others?
Once you hear the difference in sound quality, or see the difference in image quality, you cannot undo it.
I have become very picky with display resolution and text clarity, and it has not served me well. I miss the days I was happy with a 1080p monitor.
Now if you ask me, that monitor is causing eye damage, and I'd rather not use the computer that day than use it.
Additionally, a lot of audio pipelines (even beyond the DAC - amplifiers and similar) can end up with artifacts and harmonics in more audible frequencies. This is often most notable with extremely high sample rates (like 96kHz and similar): there's honestly nothing any human can actually hear near that range, but that doesn't mean it doesn't affect audible ranges when actually played back on real equipment.
The big point is that "Being Able To Tell The Difference" isn't always the same as "Better Quality". You're often just replacing one artifact of the playback pipeline with another. Neither may truly match the original performance.
[0] https://sound.stackexchange.com/questions/38109/lame-why-is-... - while not an explicit "low-pass" filter, the default option of "-Y" does something similar.