So I know most of you probably depend on lossy codecs to fit most of your library on your mobile devices, so which one do you prefer? Me, being autistic, I prefer Ogg Vorbis (libVorbis 1.3.5) at q10 (500kbps).
AAC 320Kbps does it beautifully for me. Definitely higher quality than 320K MP3 imo.
Slightly unrelated: HE-AAC (v2) is truly amazing. It gives the equivalent of around 192k-256k MP3 at only 64kbps. Insane quality for the bitrate.
Nero was far and away the best a few years ago when I gave a shit and kept up with hydrogenaudio stuff. Probably doesn't matter what you use anymore as long as it's not the garbage FAAC.
Just a follow up about the HE-AAC thing in case anyone was interested. I made some examples here: http://pastebin.com/raw/jpiZHmJi
imo, HE-AAC is very impressive for the quality it squashes into such a low bitrate. For whatever reason, HE-AAC only supports a maximum bitrate of 64kbps. As far as i know, it's used in DAB digital radio because of the quality it carries vs the bitrate.
HE-AAC definitely is not a replacement for MP3/AAC 320k. From what i can tell, 64k HE is about equivalent to 192k AAC (about 256k MP3). When comparing to the 320k AAC version, you can definitely hear a loss in quality (definitely losing some highs), but it's still very impressive given the size, especially when compared to 192k AAC.
The 64k AAC/MP3 are there just for fun, but they really give some contrast to how much is being squeezed in. It's kind of a shame it's not used more imo; it seems to only be used in digital radio and internet radio streams. It's not the most available codec either: to encode it you either need Nero on Windows/Linux, or on Mac you can use QuickTime.
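To put those bitrates in perspective, some back-of-envelope arithmetic (a hedged sketch: this ignores container and tag overhead, so real files run a bit larger):

```python
# Rough file-size estimates for a 5-minute track at various bitrates.
# Pure arithmetic; ignores container/metadata overhead.
def size_mb(bitrate_kbps: int, seconds: int = 300) -> float:
    bits = bitrate_kbps * 1000 * seconds
    return round(bits / 8 / 1_000_000, 2)  # decimal megabytes

for label, kbps in [("HE-AAC 64k", 64), ("AAC 192k", 192), ("AAC 320k", 320)]:
    print(f"{label}: ~{size_mb(kbps)} MB")
# HE-AAC 64k: ~2.4 MB
# AAC 192k: ~7.2 MB
# AAC 320k: ~12.0 MB
```

So the 64k file is a fifth the size of the 320k one, which is where the "impressive for the size" part comes from.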
I see, i think Apple's implementation of AAC is pretty good. I do hear some awful things about FAAC, but i've never actually tried it so i don't know.
If i'm not mistaken, the Nero encoders aren't available on Mac anyway, and the Apple encoders are only available in QuickTime for Windows (which is basically outdated now, and you probably need pro to do encoding with it)
>and the Apple encoders are only available in QuickTime for Windows (which is basically outdated now, and you probably need pro to do encoding with it)
neat. it still technically uses the QT Components
>qaac requires Apple Application Support that is included in iTunes or QuickTime
but that's not a problem. As it appears to support iTunes, at least they'll be up to date.
I'd still be interested to see if anyone has some nicely presented comparisons between Apple AAC and Nero AAC (or any other contenders)
>>I'd still be interested to see if anyone has some nicely presented comparisons between Apple AAC and Nero AAC (or any other contenders)
Nice, seems comparable. I think the AAC gives slightly more detail to the highs but other than that it's pretty similar at the same bitrates
I've never really used Opus or Vorbis because they generally seem less supported.
In general, you can nearly guarantee that AAC can be played by anything made in the last 10 years. My old Sony Ericsson K800i could even play AAC files.
yeah. weirdly enough, HE-AAC often refers to "v2". HE-AAC v2 streams are backwards compatible with v1 decoders, they'll just play back at lower quality.
I think v1 adds SBR and v2 adds some parametric stereo magic on top, so if the decoder only supports v1 you can still play it, but i think it plays in mono and has noticeably lower quality.
Again, HE-AAC isn't something i'd store files in unless i really needed to, but if you want/need to send a small audio file, it would do the job nicely.
Another note is that it really doesn't take long to encode. On my 2.26GHz Core 2 Duo, it encodes a 5 minute song from FLAC in about 5 seconds
I like standard, standard is immortal, standard is what everyone implements, standard is what will give the fewest problems.
Fuck the extra 0.2mb of compression I can cram by using Opus or Ogg or whatever. I'll keep my 320kbps, and if anyone complains I'll remind them of the size difference vs uncompressed audio.
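For anyone who wants actual numbers behind "the size difference for uncompressed audio", a quick sketch (plain arithmetic, assumes CD-quality PCM and a 5-minute track):

```python
# CD-quality PCM: 44,100 samples/s * 16 bits * 2 channels.
def pcm_kbps(rate: int = 44100, bits: int = 16, channels: int = 2) -> float:
    return rate * bits * channels / 1000  # kilobits per second

def mb_per_5min(kbps: float) -> float:
    return round(kbps * 1000 * 300 / 8 / 1_000_000, 1)  # decimal MB

print(pcm_kbps())               # 1411.2 kbps uncompressed
print(mb_per_5min(pcm_kbps()))  # ~52.9 MB for 5 minutes of raw PCM
print(mb_per_5min(320))         # ~12.0 MB for the same track at 320k
```

Even "wasteful" 320k is already a ~4.4x reduction from the CD data rate, which is the point being made: the marginal saving from switching lossy codecs is small next to that.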
i would agree, but i'm pretty sure that 320k AAC sounds better than 320k MP3. In my head, anyway.
It's the same size and supported by practically everything, so there's no real difference in usability. The only annoying thing is having to find FLAC and encode it myself, because almost nobody publishes AAC at 320k (iTunes does AAC 256, which is about MP3 320k to me).
>standard is what everyone implements, standard is what will give the fewest problems.
I don't own or use anything that doesn't support Vorbis. It's 2016. If you own hardware or use software that doesn't support Vorbis, or at least AAC, you're using the wrong things. AAC is arguably the "standard" these days anyway, as much as there is a standard music format. AAC is also literally the successor to MP3.
Crippled... how? It's the algorithm that's taught in universities, and unlike AAC it's de facto open (even if only Ogg is de jure open).
No one is saying it's the best, just that it's what you can expect everyone to use.
>i would agree, but im pretty sure that 320k AAC sounds better than 320k MP3. in my head anyway
... So long as you can admit it's a placebo? AAC sounds better at lower bitrates, so you can get away with more compression. At higher bitrates it's "transparent", meaning there's no psychoacoustic reason for it to be objectively better. It's pure placebo unless you're a bat, and good sir, bats are not allowed to internet!
>It's the same size and supported by practically everything, so there's no real difference in usability.
Only Apple devices and a few other things implement AAC, which is the whole point of MP3: everything implements MP3, every operating system and every Chinese-designed player.
When you're dealing with file formats, standards really should rule.
>It's the algorithm that's taught in universities
They also teach Visual Basic in universities, guess it's standard now.
>Only Apple devices and a few other things implement AAC
Wake up m8, it's not 2000
I'm pretty sure MP3 isn't de facto open. Just because it's thoroughly understood and propagated doesn't make it open. See H.264 (and x264).
AAC performs much better at the same or lower bitrates than MP3. Most noticeably, it preserves bass and highs better than MP3 does, making for a more satisfying sound, to say the least.
I'm not admitting 320k AAC is placebo. I did some ABX testing between 320k MP3 and 320k AAC and found AAC to be better (though marginally). In most listening situations i probably wouldn't notice; it's only when listening at high volume and carefully that i can actually spot the difference.
If i'm just listening as background noise to work etc, that doesn't matter. If i'm actually listening because i've got nothing to do (eg: on the bus), then it annoys me to hear compression artifacts.
That's why i said "in my head anyway". The difference is marginal, but it exists
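If anyone wants to sanity-check their own ABX runs, the standard trick is a one-sided binomial test: how likely is it to score at least that well by pure guessing? A minimal sketch (the 12-of-16 example is just the usual p < 0.05 cutoff, not a result from this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of doing at least
    this well by coin-flipping (p = 0.5 per trial)."""
    hits = sum(comb(trials, k) for k in range(correct, trials + 1))
    return hits / 2 ** trials

print(round(abx_p_value(12, 16), 3))  # 0.038 -> under 0.05, probably not guessing
print(round(abx_p_value(11, 16), 3))  # 0.105 -> could easily be chance
```

So "marginally better" only means much if you rack up enough trials; 12 or more correct out of 16 is where chance stops being a good explanation.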
Same here except q6.0 on portables.
well mp3 and every other lossy codec is still encoded from a lossless source.
>so isn't lossless the standard
Still not the point. People buy shit from China that's typically going to get shipped and sold the first moment it arguably does what it's meant to; they're not going to implement more than one or two codecs, and guess what, it's always going to be MP3 first.
Take your shit to the loo, you're completely missing my point.
Opus is objectively the best at every bitrate.
Makes me sad it's not being taken up faster.
We must restore our zeal for technological advancement.
I don't think you realise how far AAC has propagated. It's officially MP3's successor, and was released in 1997. Back then, most computers didn't even play MP3 without installing codecs/software.
Pretty much anything will play AAC these days. My K800i (predates the first iPhone) played AAC. My PS3 can play AAC. Most car stereos that support "data discs" (like MP3 CDs) or USBs will play AAC.
If people called it MP4 instead of AAC, you'd probably think differently about it. AAC is just the audio codec in the MPEG-4 standard.
I feel the same way.
FLAC = My music sounds so much better because it's 10x the file size and 1 album can have its own HDD.
MP3 = I love my music and 320kbps is basically the best a human ear can perceive while still being a reasonably sized file.
WAV = I rip my CD's in the format they came in. Nothing too fancy. The files are somewhat bigger than MP3.
MP3, Vorbis and Nero AAC-HE are obsolete; Opus is way better at 80k and lower bitrates.
Nero AAC-LC is still the best for higher bitrates because it supports higher frequencies; Opus doesn't, and the others suck.
I don't see the point of using Vorbis or Opus if you're encoding your music over 256kbps. The biggest draw of those lossy formats to me is the transparent sound quality they have at 100-160kbps.
Placebo. Besides, if both files are encoded at 500 kbit/s, there's literally no size difference. Opus was made for low bitrates in the first place; it cuts the higher frequencies to retain detail in the lower ones. If you want to keep the higher frequencies intact, you need a codec that was made for that, like AAC-LC (recommended) or AC3.
No, even Chinese garbage players like the RockChip based ones support Vorbis and AAC. You have to support AAC in order to play shit that people bought from the iTunes store. I actually own a RockChip based player and it supports Vorbis. You shouldn't own any player that is garbage enough to not support Vorbis or AAC in 2016. There's literally no excuse for owning one and therefore no excuse for encoding music to MP3 any more.
You don't listen to music with your eyes.
That said, FDK AAC is absolutely worse than Apple's encoder. FDK AAC exists because Google needed to buy an AAC encoder for Android, for things like recording sound in .mp4 files from the camera. Because it would be freely available as a result, Fraunhofer didn't want to put its best tech in, so it's kind of a compromise.
>duh genius, but you need to look at the math to understand how a codec works and for what purpose it was made.
Very few people in this world know enough about how codecs work to look at something like a spectrogram and be able to tell fuck all about it.
Plenty of people *think* they can though.
>Because it would be freely available as a result, Fraunhofer didn't want to put its best tech in, so it's kind of a compromise.
meh... well, ty for the info anon, good to know it's another codec to trash.
HE-AAC is nice and designed for low bitrate operation but at the same low bitrates Opus is vastly superior.
I use Opus now after LAME V0 for so many years, the files sound better (I can A/B Opus 128 Kbps vs LAME 128 Kbps at about 90%) and I'm happy with it. Spent a week last year redoing my 2200+ CD collection from the FLAC backups to Opus 128 Kbps and the collection takes up like 1/3 the amount of space and fits nicely on a microSD card with room to spare. Hand tagged (verified each fucking one for any potential errors), with cover art embedded in the files directly so I don't have the fucking photo gallery spammed with 8,000+ cover art images, and all hard leveled to 92 dB with ReplayGain +3dB settings so I don't have to fuck with the volume EVER.
Works for me.
Opus rocks, seriously.
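For the curious, ReplayGain-style leveling is just a per-track gain in dB applied as a linear multiplier on the samples. A minimal sketch (the -6.5 dB track gain is a made-up example; the +3 dB mirrors the preamp setting mentioned above):

```python
# A gain in dB becomes a linear multiplier on every sample:
# dB = 20*log10(g)  <->  g = 10**(dB/20)
def db_to_linear(db: float) -> float:
    return 10 ** (db / 20)

# e.g. a track whose ReplayGain tag says -6.5 dB, plus a +3 dB preamp:
gain = db_to_linear(-6.5 + 3.0)
print(round(gain, 3))  # ~0.668: each sample is scaled to about 67%
```

The player (or a tagger doing "hard" leveling) just multiplies the decoded samples by that factor, which is why leveled files all land at the same perceived loudness without re-encoding artifacts.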
I see that shit all the time from "sound experts", and yet the people that participate in such listening tests at HydrogenAudio (where the people that actually create the encoders hang out and discuss them) all favor Opus at this point in time for pure sound quality vs all the other encoders. No, it's not as widely supported, but if you're not using a piece of shit media player it's a non-issue. Hell, even Android 6 can now play Opus (finally) with the Google Play Music app without issues.
Opus is the lossy equivalent of a new Sheriff in town, deal with it.
Figures that the day i finally switch players is the day that Poweramp 3 finally comes to life. Oh well, desu GoneMAD looks better and I paid only 1 dollar for it.
GoneMAD is superior in every respect so, good job, Anon. The developer responds fast to requests and additions, runs a small forum for it too:
and a blog:
Paid full price for it nearly 2 years ago and never been disappointed with it; it was the first Android player to support Opus as well (beat that shitty Neutron Player by months).
How do people even notice the difference between 320, 256, 224 and 192? Of course if somebody converts from a higher bitrate to a lower it will have noticeable artifacts, but I created 320kbit and 192kbit mp3 through a music project file and there was literally no difference.
I believe source file -> 320 will have the same sound quality as source file -> 192.
Btw, no bait.
>How do people even notice the difference between 320, 256, 224 and 192?
The no bait honest answer is simple: everyone has different hearing, seriously. Some folks, especially younger people, have hearing that easily allows them to hear into the higher frequencies/registers and they'll take note of distortion or missing content compared to the original.
Realize that lossy compression is just that: lossy, meaning the psychoacoustic modeling algorithms (fancy big words for "throw out what the human being won't consciously notice during playback") literally scan the audio during the encoding process, find stuff that you won't consciously "hear", like a particular guitar note that gets buried under drums and other instruments at a particular moment in time, and remove it from the final encoded product. The reason you get smaller files is that upwards of 90% of the original data can be tossed out of the final encode. So many people don't understand this about lossy compression, and it's part of why some people can hear the difference and some can't.
Lossless compression, on the other hand, is an exact duplicate of the original - no content is lost at all so it sounds exactly the same as expected. Lossless compression doesn't damage the content at all and doesn't get rid of anything in the encoding process so when it's decoded everything is still there.
With lossy compression you end up with a file that's up to 10x smaller (or even smaller in some situations) because huge portions of the original content are lost in the process of encoding, and again, some folks with exceptional hearing can detect when such content is missing. If you're older and your hearing isn't quite as good as it might have been, or you've had some damage to your ears from attending too many loud concerts or events (or even worse, you listen to shitty music with too much bass and treble and no balance at excessive volume levels), you're screwed.
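A toy illustration of the "throw out what you won't notice" idea: keep only the strongest frequency components of a tiny signal and reconstruct. This is nothing like a real psychoacoustic model (no masking, no critical bands), just the bare shape of transform-then-discard:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for a toy 16-sample signal)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning real samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# Toy "signal": a strong tone plus a much weaker one.
N = 16
x = [math.sin(2 * math.pi * 2 * n / N) + 0.05 * math.sin(2 * math.pi * 6 * n / N)
     for n in range(N)]

# "Encode": transform, then throw away all but the 2 strongest coefficients
# (a real codec uses a psychoacoustic model to decide what gets dropped).
X = dft(x)
keep = set(sorted(range(N), key=lambda k: -abs(X[k]))[:2])
X_lossy = [X[k] if k in keep else 0 for k in range(N)]

# "Decode": the strong tone survives; the weak one is gone for good.
y = idft(X_lossy)
err = max(abs(a - b) for a, b in zip(x, y))
print(round(err, 3))  # 0.05 -- exactly the weak tone's amplitude
```

The discarded component cannot be recovered from what's left, which is the whole difference from lossless compression: there, the decode reproduces the input bit for bit.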
This makes sense.
>from attending too many loud concerts or events - or even worse, you listen to shitty music with too much bass and treble
The former is obvious, but is bass really that damaging? I limit my volume output always to 20% on every device so I don't damage my ears, but is bass enough to ruin your ears?
Yep, it is. Well, in all honesty anything in the 20 Hz to 20,000 Hz frequency range can be damaging if it's too loud for extended periods of time. Our hearing is a physical thing: the tiny hairs in the inner ear and on the eardrum itself pick up the vibrations from the air as sound, but they just don't "work" as well as we age, and loud excessive volume damages them. It's a given that our high frequency response tapers off with age, but the loud booming bass shit so prevalent in today's pop and rap music (as well as the idiots blaring that shit from their crap-ass audio systems in their vehicles) damages our hearing constantly.
In today's world most everything damages our hearing. Imagine life in the past, like 150+ years ago: the loudest sound in the entire world was probably either a dynamite explosion or a gunshot, both of which could be heard with relative infrequency. Nowadays, think about how often you hear very loud stuff even in spite of trying to be protective of your hearing: sirens from emergency vehicles, the sound of such vehicles in operation just from their engines, jets flying overhead, machinery, everything is damaging us constantly in some auditory manner.
I'm very protective of my hearing also: only attended 3 concerts in my entire life (and wore earplugs which still didn't really help all that much), I don't blast music into my ears when I'm listening, I don't use EQ (personal preference, always has been, I listen to stuff as mastered from source), low volumes rarely over maybe 70% depending, and so on.
As I'm getting older my high-end responsiveness isn't what it used to be, so Opus 128 Kbps encodes sound just fucking fine to me, and that's what my entire 2200+ CD collection now exists in. Opus is pretty amazing; I don't suspect we'll see anything come along that'll improve on it much more. One of the guys behind Opus also created Ogg Vorbis (Monty Montgomery), and the guy knows his shit like not many others.
I regret buying GMMP because like 4 days after buying it i discovered Neutron, which is the only player with a parametric EQ.
That blows the shit out of any of the other shitty graphical EQs.
Opus is designed for low bitrates, so don't ditch your higher quality encodes for it.
Like HE-AAC further up the thread - it's incredibly efficient for the bitrate but has a limit.
HE-AAC has a maximum bitrate of 64k which sounds roughly equivalent to 192k AAC. If you need higher quality, you're gonna need to jump to 256k+.
Same with Opus: if you need to send small files or cram a lot of music onto a small amount of storage and don't mind sacrificing a little quality, use Opus. But for your "main" collection, you should use another higher-quality codec like Vorbis/AAC, unless you really want to hold onto FLACs.
I know that I am just an apple-pleb with an old iPod touch and iOS 7.0.4 Beta, but does anyone know if there are any iOS apps similar to GoneMAD that can play Opus? I scoured the internet for what seems like hours and have turned up no results...
At the higher qualities, up to the highest (500 kbps), I've found Vorbis outperforms Opus, as if Opus was missing something in the higher frequencies and didn't sound as "lively".
>HE-AAC has a maximum bitrate of 64k which sounds roughly equivalent to 192k AAC
Now that made me laugh, pretty hard actually. HE-AAC is good, yes, but no, it doesn't match the actual audio quality of AAC at 192 Kbps, not even close. More like 96 to 128 Kbps depending on the source material, sure, but 192 Kbps? Nope.
Opus is still superior, really, even at 64 Kbps which is basically where it starts providing "CD quality" encodes.
This new xHE-AAC could be promising however, some discussion about it at:
Listening to a 24 Kbps stream at the moment using the xHE-AAC demo app on the Play Store. Sounds ok to me for casual listening, but obviously it's not "CD quality" or even close to it. But for 24 Kbps and stereo, it ain't too bad.
It's a shame there were some coding techniques and features they couldn't figure out until a bit after they had already put it through the standardization process. Now it can't be changed without breaking the standard as it's defined, and thus, hardware support.
Some shit about mapping values onto a sphere and using that to code residuals. I don't know anything about opus or audio encoding. The technique made it into daala though.
>take phone out of pocket
>open google music
>type in song or band i want (all access, bitches)
>listen to song
>song sounds good
>codec dun fucking matter
>Settling for your perception of "good enough" at any given moment
>Not caring to make sure you eventually experience the best
>Thinking you're superior for this
People like you are the worst. Have some vision. Have some standards. Have some awareness.
FLAC for muh future lossy encoding and CD rips
WMA 2 set at "quality 75" for my Philips mp3 players (gogear spark)
Opus 144kbps for listening on muh PC and on my smartphone (AIMP is the player I use on android)
There were, but that's always true - you can keep on iterating on a codec forever and keep making it a bit better. It's important to actually stop and release it though so that people can use it.
There are some improvements you can make without changing the bitstream format, though. I know really low music bitrates (<48kbps) are being worked on right now for a future encoder release.
Much of what he's saying here makes little to no sense. Specifically, the part about quantization and sampling. What he's calling "lossless" is completely meaningless. Of course it losslessly captures the input to whatever degree the machine doing so is able. It's lossless relative to what, itself? How an ear would have heard it? He doesn't say, because it isn't lossless and the term doesn't belong there.
Likewise if you sample too few times per interval, you can miss fine momentary details. They're gone. You can't possibly reconstruct what you aren't aware exists, and those small bumps might add up.
I'd email him for clarification, but I know it would devolve into a philosophical argument about the machinery driving the universe itself. It isn't lossless if it doesn't capture absolutely every single interaction that occurred and would have affected what an ear would have perceived. Any other definition is garbage and I don't like it.