Now that I've developed an application for compressing files, what is my next step? It can compress enormous files, like 100 GB down to 2 KB, and there are no limitations depending on the file extension.
>>675772117 It seems like you don't believe me. It uses complicated algorithms; it works with files on the binary level and the algorithms process that information. How, you may ask? Well, as I said, I can't say.
>>675770961 Here's my pseudocode for compressing 100 GB to 11 bytes:
> if the input is 100GB of 0s:
> output "100GB of 0s"
Unfortunately, it only works on files that are all 0s. If you have something that is reversible and works on any arbitrary file, congratulations: you've broken math, and there's no reason consequence should follow premise anymore.
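To spell the joke out, here's that pseudocode as runnable Python (a toy sketch; the 100 GB is scaled down to 1 MB so it actually runs):

```python
def compress(data):
    # Only handles the single family of inputs it was designed for: all zero bytes.
    if data == b"\x00" * len(data):
        return len(data).to_bytes(8, "big")  # 8 bytes encoding "N zeros"
    return None  # every other input: no luck


def decompress(blob):
    n = int.from_bytes(blob, "big")
    return b"\x00" * n


original = b"\x00" * 1_000_000  # stand-in for the 100 GB of 0s
blob = compress(original)
assert blob is not None and len(blob) == 8
assert decompress(blob) == original

# But flip a single byte and the scheme has nothing to say:
assert compress(b"\x01" + b"\x00" * 999_999) is None
```

It really does turn a megabyte into 8 bytes, and it really is lossless — for exactly one input per file size. That's the whole trick.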
It is not possible to compress 100 GB to 2 KB, even for text, which is the most highly compressible thing known to man. Even a 100 GB text file would compress to hundreds of megabytes AT LEAST. 4 MB is a song. 100 GB is 10 full copies of Skyrim. 2 KB is not even 1 second of audio at a horrible bitrate. You are not going to compress 10 copies of Skyrim into less than one second of audio; it can't be done.
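The impossibility is just counting (the pigeonhole principle): a lossless compressor must map different inputs to different outputs, and there are vastly more 100 GB files than 2 KB files. A quick sketch using the thread's numbers:

```python
# A lossless compressor is an injective map from inputs to outputs:
# two different inputs can never share the same compressed file,
# or decompression would be ambiguous.

input_bits = 100 * 10**9 * 8   # bits in a 100 GB file
output_bits = 2 * 1024 * 8     # bits in a 2 KB file

# There are 2**input_bits possible 100 GB files but only
# 2**output_bits possible 2 KB files, so inputs outnumber
# outputs by a factor of 2**(input_bits - output_bits).
surplus = input_bits - output_bits
assert surplus == 799_999_983_616  # ~2**800 billion inputs per output

print(f"each 2 KB output would have to stand for 2**{surplus} different inputs")
```

No amount of "complicated algorithms working on the binary level" gets around this; the map simply doesn't fit.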
>>675772385 He probably does not believe you because you are a liar.
What you claim is not possible.
Well you could divide the file by itself and the result would be 1. But then your program would have to carry the information in some type of data file that would be the same size as the original file.
>>675770961 Bullshit. You cannot do that, short of lossy compression that throws away 99.999999999999999% of the data you're compressing. So >100 GB down to 2 KB is bullshit; a few lines in Notepad is more than 2 KB, and you are trying to fit 1.66 copies of GTA 5 into that. It does not make sense; 100 GB is really big.
Eh, I could see it being possible, but you would need a giant pre-existing program installed so it could look up what the compressed format is expressing.
Here's my idea, install some giant ass library of data pieces by ID
e.g. 1= 010101010101... 2 = 0110110110... etc...
But then the IDs would start to get longer with the different possibilities and make your compressed file > 2 KB.
You could make the chunks really big, so there would be fewer IDs, but then you'd be limiting yourself in what you could "compress". You could make some kind of converter to use before the "compressing", but at this point I think it's kind of just a pain in the ass.
>>675777833 The problem with this is that you need the same amount of data to store the ID as the chunk takes up. If your chunk is 2 bits, you have the possible values 00, 01, 10, 11. That means you need IDs 1-4, so your IDs have to be 2-bit numbers, and you've used the same amount of data to store the ID as the data itself.
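You can check that counterargument directly: for n-bit chunks there are 2**n possible values, so an ID that can name every one of them needs exactly n bits. A quick sanity check (chunk sizes picked arbitrarily):

```python
# For arbitrary data, every n-bit pattern can occur, so the dictionary
# must have 2**n entries, and an ID distinguishing them needs n bits.
for chunk_bits in (2, 8, 64, 4096):
    possible_chunks = 2 ** chunk_bits
    # Smallest number of bits whose values cover IDs 0 .. possible_chunks - 1:
    id_bits = (possible_chunks - 1).bit_length()
    assert id_bits == chunk_bits  # the ID is exactly as big as the chunk

print("IDs save nothing on arbitrary data")
```

Dictionaries only help when some chunks are more common than others (which is what real compressors like DEFLATE exploit); on arbitrary data every chunk is equally likely and the IDs break even at best.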
This is a 4chan archive - all of the content originated from that site.