Old thread >>51467711
>What is IPFS?
It's basically BitTorrent on steroids.
>why would one use it
* Distributed, decentralized filesharing (for now - ipfs is merely the communication protocol, and additional applications can exist on top of it).
* You can have a mutable address (i.e. always points to the latest version of a site), or a static address (points to a specific file). Yes, you can host sites over IPFS.
* Peers are found fast for new downloads. You don't need to wait that much to start a download.
* You can watch your animu while it downloads. I watched a few episodes that way and it didn't even buffer.
>how to upload a single file
$ ipfs add ./$file
Access it at localhost:8080/ipfs/$outputted-hash
>how to upload a dir
$ ipfs add -r ./$dir
Access it at localhost:8080/ipfs/$last-outputted-hash
>how to make the thing mutable
$ ipfs name publish $file-or-dir-hash
Access it at localhost:8080/ipns/$output-hash-aka-peerid (it's ipNs not ipFs)
To update, publish another hash and it will be available at the same IPNS address.
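For example, a minimal round trip (all hashes here are placeholders, not real objects):
$ ipfs add ./index.html
added QmSomeFileHash index.html
$ ipfs name publish QmSomeFileHash
$ ipfs name resolve $your-peer-id
/ipfs/QmSomeFileHash
After editing the file, just add it again and publish the new hash; the /ipns/ address stays the same.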
>gateways (how to access IPFS if you don't have it installed)
>most recent talk about it by the dev
>I2P and Tor support coming soon™. We need that thing anonymous so pls halp.
Daily reminder to pin files that you care about.
Index of sites and various files:
Pomf clone on top of ipfs: glop.me
Image sharing on top of ipfs: ipfs.pics
Sharing is caring.
Question: can/how would streaming (e.g. Internet radio) be done with IPFS? I get stuff like twitter or 4chan, but streaming seems a bit "impossible"... Or am I just not understanding it?
When you download from ipfs, you receive chunks on demand. You can already stream videos over ipfs without too many issues, so the downloading side works.
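A minimal sketch, assuming you have mpv installed (the hash is a placeholder):
$ ipfs cat $video-hash | mpv -
ipfs cat writes the blocks to stdout as they arrive, so playback starts well before the whole file is in.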
As for uploading, it seems multipart formats are supported. Not sure how well exactly, but you should definitely look into it. In theory, it could also be implemented with ipns once ipns supports arbitrary signing keys (i.e. with forward references).
Find the URL and download. You can get results from google by searching through an ipfs gateway, e.g. with site:gateway.ipfs.io/ipfs (or ipns), or via our site: ipfs.io/ipns/QmaGks9KKzu2WykHQjJFJkcUAN4ZoF7ok9h2hXj1WQn47U/
which links to content we know about.
There isn't a ton of content on ipfs yet. There have been a few sites posted that have a bit of content setup. But as of now, there really isn't much up there. If you are looking for files and such, it's best to stick with normal torrents for now.
You can go to localhost:5001/webui and drag'n'drop the files. From the command line, you'd do ipfs add -r "the folder" and note the hash next to the folder (should be the last displayed hash). Note that this copies every file into your local IPFS repository even if it's already on disk; in-place sharing is a planned feature but not yet available.
Be careful about uploading that much stuff. Currently IPFS copies all the files you want to send. So it doubles your disk usage. Might wanna pick and choose what you want to upload.
To upload directories, like entire shows, use ipfs add -p -r <dir>
To upload single files, use ipfs add -p <file>
It will calculate the hash for you and print it out. That's what you want to share.
It looks like 7 people have accessed it so far so it should be super fast if they all pinned it (doubt it) and once the initial seed is done.
It should take approximately an hour at current speeds if what ipfs stats bw says is accurate.
It looks like I'm actually seeding some other file out at pretty high speeds, so that's probably taking all my bandwidth.
How could you tell 7 people have accessed it?
Drat. Thanks anyways man.
I started looking through the different versions of this, I think it actually might be one of them.
This is the only thing I recognize.
if we're gonna do this, rather than having a bunch of stupid generals, we should have a single one concerning:
and tangentially, tor, retroshare and the like, etc
Oh nice. This however, is giving me a bunch of errors. I'm assuming the peer ID hashes are people that are seeding it, but what does it mean when it says unrecognized event type: 6 or when it says error: dial attempt failed: failed to dial <peer.ID aS15tE>?
Is it fine to just ignore those errors and count the peers listed?
>Is it fine to just ignore those errors and count the peers listed?
Yes, that's what I'm doing.
I don't know what unrecognized events are but I think the unable to dial peer means that someone in the DHT routed you to a peer that is disconnected. This is just a guess though.
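If you want to count providers yourself, something like this should do it (the hash is a placeholder; I'm assuming ipfs dht findprovs is the command being used above):
$ ipfs dht findprovs $file-hash | wc -l
Each output line is the peer ID of a node announcing it has the content; the dial errors just mean some of those peers have since dropped offline.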
Are you guys still pinning my smug collection? How's the progress so far?
I'm still downloading and waiting for pin to finish.
N o i c e
Real quick, are these ipfs hashes in base64? base64 for URL?
Alright, thanks. There seemed to be too few special characters. Can you tell me how they are encoded, and why?
It's content-addressed, so when your client asks for a file, you need a way to address the content itself (and not the location where it happens to be stored). The address thus has to uniquely identify your file, but obviously be shorter than the file itself. Strictly speaking you can't have both, but a cryptographic hash function hits the sweet spot: you get a short address and can be more or less convinced that you'll never run into a collision, i.e. two files sharing the same address. Since it's cryptographically secure, you're protected both from accidental collisions and from ones forged by an adversary.
Heh. My mistake for being unclear; I meant to ask how the hash is encoded. Googling around, I get the impression that it's Base58, which seems stupid and I'm not sure why they've chosen that.
Oh, okay, now I understand you. Base58 seems to be the standard encoding for other Merkle DAG based protocols, e.g. Bitcoin. Here is their reasoning: https://en.bitcoin.it/wiki/Base58Check_encoding
// Why base-58 instead of standard base-64 encoding?
// - Don't want 0OIl characters that look the same in some fonts and
// could be used to create visually identical looking account numbers.
// - A string with non-alphanumeric characters is not as easily accepted as an account number.
// - E-mail usually won't line-break if there's no punctuation to break at.
// - Doubleclicking selects the whole number as one word if it's all alphanumeric.
Thanks. I guess I just don't value those reasons as much as I value powers of 2, so I'm unlikely to convince them to change anything.
Anyone know how I could set up a functional website using php? Not very many people have php installed on their computer.
Or perhaps there is another type of script I could use? I'm pretty new to web development.
Dynamic content isn't feasible outright. Either use plain http for that (i.e. every resource is on ipfs but the frontend is an http site, like with neocities and ipfs.pics), or consider building the site with smart contracts (eris or ethereum among several options) if you're interested in having a fully distributed and decentralized site.
I guess I should start uploading all the rare files I've been archiving over the years. Here are some:
This one disappeared with Megaupload:
Special TLC made for /f/, original was only uploaded to Pomf:
Dilbert 1-3. Creator removed the originals, and all that is left are shitty Youtube reuploads. I had to hunt around for a long while to find versions with as little generation loss as possible:
A cryptographically-ensured form of distributed computing. Basically, instead of the server performing and verifying transactions, users are allowed to do so, because cryptography ensures that the operations must be correct and no tampering can happen. Check the sites of the respective technologies to learn more.
That's the same thing without the contract, no? All the website files are distributed and decentralized. Any data included in ipfs links is also distributed and decentralized. What's the difference?
That's true only with static sites. If you serve dynamic content, then you can't rebuild the site without a direct connection to your server to get the feedback necessary to perform the rebuild. With smart contracts, any user can initiate a rebuild, because smart contracts ensure that the input is legitimate and that the output is correct, basically.
You don't want the http interface for that, you want the 8080 interface. Check out ipfs config --help and you should find a way to specify more places to listen on, but you shouldn't have a problem connecting to another machine on 8080 over LAN. hostname.domain:8080, as per usual.
Ahhh. I think I see now. So users submitting content to a site would not work though. Because then it would have to send that content out to everyone. Correct?
Also, does a user have to have the contract software installed to use it?
Not him, but I have one always-on server and a couple of desktops and laptops, and would prefer to pin files on the server, so that the files are always accessible. Doing that through the webui would be better than having to SSH into the machine constantly.
you can change the bind address in the config file for this (>>51483933)
the webui is on 5001, so change that line instead/as well
don't port forward 5001/8080 externally though, only 4001 (even that appears to be optional, i've had ipfs.io get content from my machine even through a nat, shit's magic)
Do what I said in my reply here >>51483933 But also change the API line to have the 0.0.0.0 part. That just specifies which address it's listening on; 0.0.0.0 means all interfaces. Also make sure your ports are open through your firewall.
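If you'd rather do this from the command line than edit the config file by hand, something like the following should work (double-check the keys with ipfs config --help):
$ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
$ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
$ ipfs daemon
(restart the daemon so it picks up the new addresses). As said above, don't port forward 5001/8080 externally, since there's no authentication.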
the one anonymizing function ipfs has is that by design there are no 'origin' servers
when something is added, you seed that thing as a node, if someone else downloads it (or adds the same file themselves), they also seed it as a node, and there is no way to tell who put the file on the network initially
the way things like tor and maybe gnunet (haven't really looked into gnunet) work is that they do 'onion routing', or they route traffic over various randomly chosen nodes so it's impossible to tell who is accessing what
ipfs doesn't do this by default because it's slow, wasteful, and makes it impossible to source data from close peers, basically, it defeats a couple advantages
the inventor of ipfs realizes that ipfs will only be adopted widely if it's faster than what we have, he's being practical
most people don't need to have their static traffic anonymized, those who want to can by layering it over a network like tor, that's what they were made for
You could infer from this that sharing <popular movie> is similarly risky to using BitTorrent, because content protectors send fines and C&Ds to seeders and downloaders in general rather than the original uploader.
People keep mentioning using smart contracts with web development. Can anyone explain how that would actually help a website? It seems that you have to install the smart contract stuff to use the contract. No one wants to install software just to browse a site.
Yeah. There were some anons talking about it a while ago in the thread. Just look back 10 or so posts. Only thing you'd have to do different is port forward external ports as well.
Beware though, the webui has no authentication. If anyone finds your webui, they can upload whatever they'd like through your connection.
>How does it work?
Are you talking about why it's highlighted as a link in your browser or something? Because that's not a link that would work on the normal Internet (it's probably for some other Freenet-like web that I don't know about).
Oh, right. IP addresses can be given as a single decimal number instead of the usual dotted-decimal notation. Kinda forgot about that. Example (you have to add a HTTPS exception): https://1347911747/
That's not deduplication at all.
In fact, that's the opposite of deduplication.
What you're really doing is mirroring the files across servers.
Deduplication is an effort to reduce duplicate files on a local server using things like symlinks to save disk space.
well, it depends how you look at it
lets say we both have .. an episode of a tv show, the exact same file
lets say we both make a torrent including the file and share it
in this case you'll end up with two separate swarms sharing effectively the same data, but in such a way that they are incompatible (you don't automatically benefit from the peers of both torrents, only the ones connected to the torrent you chose)
with ipfs, if we both add that episode to ipfs, it will get the same hash, regardless of if we named the file differently or if we add it with other files (say i added the episode by itself, and you added it along with the rest of the seasons' episodes)
people who go to get that episode will automatically be able to get it from either of us, and anyone else who has that file (even ones who already had it)
you can think of it as deduplicated /effort/
the cache is naturally deduplicated locally as well, since you don't store the same block more than once
you do want several people with the same blocks, since that adds redundancy and performance
It's on the block level, not file level, so if two files owned by you coincide on some block, only one copy of it will be stored.
>reduce duplicate files
That's the same thing that the anon you're responding to is saying, so I'm not sure what your point is. Or do you think it's not deduplication if you're using something other than symlinking or something?
If I had for example a directory shared with two files in it, and later added a third one, would ipfs add -r <directory> notice the already added files and only update the directory listing by uploading the third one?
Files themselves do not have a human readable name outside of directories, so you'd have to wrap it inside a directory. One way to do this after you've already added the file is to ipfs get it, give it a filename of your choice and then ipfs add -w it. This wraps it inside a directory, so that the filename is preserved. The file itself does not need to be added again, since it hasn't changed, so this will complete almost instantly. All this does is add a new object with an entry associating the human readable name and the hash of the file. After you're done, you can delete the file you downloaded with "get".
I should mention that you'll get a new hash for the directory, though, so you'd have to update that wherever applicable (web links, IPNS publications, DNS records).
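Roughly like this (hash and filename are placeholders):
$ ipfs get QmYourFileHash -o "Nice Filename.mkv"
$ ipfs add -w "Nice Filename.mkv"
$ rm "Nice Filename.mkv"
The add -w step finishes almost instantly since the blocks are already in your repo, and the last hash it prints is the wrapping directory.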
I found a new way to add a filename:
1. Create a new directory object using: $ ipfs object new unixfs-dir
2. Add your hash to the newly created object: $ ipfs object patch QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn add-link "New filename" QmdSSEukuh311rvY7bVXKB7pt83EUmbG9wvnsa5MxTAww1
Now the hash QmP9BYqgQceZpHct2tPKZoHboTUUpbatWxdyM5Ck61E4z3 will point to a directory where QmdSSEukuh311rvY7bVXKB7pt83EUmbG9wvnsa5MxTAww1 is named "New filename".
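You can sanity-check the result with ipfs ls (using the hashes from above):
$ ipfs ls QmP9BYqgQceZpHct2tPKZoHboTUUpbatWxdyM5Ck61E4z3
which should list a single entry called "New filename" pointing at the file's hash.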
That's what "hash" means, yeah. Hash and checksum are used interchangeably. Technically, we're talking about the labels of nodes in the Merkle graph, so they are not always hashes of the files or blocks themselves, but sometimes hashes derived from the children's hashes, and so on. One file can be divided into several blocks, each of which is hashed to get an address for the block itself, and then the address of the file would be something like the hash of the concatenation of all these hashes.
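You can see this structure for yourself on any large file (the hash is a placeholder):
$ ipfs object links $big-file-hash
Each line of output is the hash of one block; the file's own address is derived from this list rather than from the raw bytes directly.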
>That one guy away from everyone else
Got it, will IPNS suffer the same problem you have with things like Bitcoin where, as the network keeps growing over time, the amount of info you need in every node becomes really high?
Is this by design?
I think an IPNS record is just a normal file saying "This IPNS name points to this IPFS path", but signed with your node's private key. The file is requested and transferred like any other file (and garbage collected by other nodes when needed). It's not distributed to every node at all.
i'm pretty sure ipfs has no ever-increasing component to it like bitcoin
ipfs has a fair amount of stuff "on" it already, all things considered, and my .ipfs folder is 138K (minus the block cache, which is only a cache of blocks i have requested myself)
Can you do that to an IPFS path? How would you declare the file name? add -w works when you are adding a "real" file, but in order to rename it afterwards you either have to get it or do some object mangling, from what I can tell.
i found it since
they have it as a low prio enhancement, but this is what i'm looking forward to the most right now
imagine the explosion of content once people can add shit without worrying about making a second copy of everything?
What do you mean? Symlinks are 1-1. You'd need a way to map each segment of the original file to a separate file in the cache. That's not possible on traditional filesystems without shared extents.
That is what most distributed file systems are.
Look at Freenet, files die on there when they don't get accessed over time and the whole network runs out of space to store it.
It is a pretty natural way of pruning old content if you start to run out of space.
i just want something along the lines of a sqlite database containing file paths, and their hashes (whole file, plus each block)
then update it like however rsync/mlocate does theirs
as for outdated entries (file has been moved/renamed/modified since), just detect them on read (failure to open the path, checksum mismatch), and either update the db to match, if possible, otherwise ignore/remove from the db
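as a rough sketch of the idea (ipfs add -n hashes a file without copying it into the repo; assumes the sqlite3 cli and no quote characters in paths):
$ sqlite3 index.db 'CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, hash TEXT);'
$ find ~/media -type f | while read -r f; do
>   h=$(ipfs add -n -q "$f")
>   sqlite3 index.db "INSERT OR REPLACE INTO files VALUES ('$f', '$h');"
> done
stale entries would then get caught on read by a stat/rehash against this table, like i said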
So you'd store the files in their original location and generate the IPFS data blocks on demand? That could work, but it looks like the devs are worried about performance, since you constantly have to rehash files and segments to check for inconsistencies.
Another option might be to flat out refuse to add any files without 'chattr +i', setting up an inotify thing to monitor for when this changes and issue a recheck when it does. Maybe it should consider the file broken until it has been reenabled and the file rechecked.
well bittorrent pulls it off
what i described is not terribly different to bittorrent, whose metadata (the ".torrent" files) is a file/folder listing along with the block hashes
just do what they do, send what you think is right, and have the receiver verify what was sent (come to think of it, they have to do this anyway), perhaps also do a checksum while sending the data, so you're aware by the end if you need to remove/update the entry
Well, with IPFS you would need to recompute more than the block hashes, because of the Merkle DAG database. Every file containing the block that changed will get a different hash, every directory containing a changed file will have to change, and so on. Not saying it is impossible, but it's potentially a lot more work.
yea, that's true
though even just adding individual files like this would be fine for me
from what i've seen, updating a directory hash seems rather lightweight
i understand a partial file change would unavoidably require a whole-file re-hash, but for many kinds of files this doesn't happen (basically any media)
different use-cases, this doesn't intend to replace those
It's fully encrypted and is designed to be compatible with anonymizing solutions like i2p or tor. i2p cannot host decentralized content and belongs in the trash for that purpose.
why would i run ipfs over tor/i2p instead of just using tor or i2p?
what is the usecase for ipfs? i watched their video on their website and i still have no idea what it's for.
encryption is to stop people spying on what you're doing. when they can literally just connect to u and see every file ur seeding ur fucked.
Freenet has a hard cap on hosting capacity, ipfs does not. The connectivity models of freenet and ipfs are wildly different. Freenet is meant to be a completely different, isolated construct, whereas ipfs is a thin layer that is meant to be compatible with existing and future technology.
IPFS is for persistence and resilience against websites dying or changing without leaving a trace of the previous versions. It's a global, distributed bookmarking system, with the added bonus that uploading files is extremely easy compared to running a web server or creating a torrent.
It depends. Once you upload a file and someone views it, they automatically become a server of their own. Normally, they garbage collect the file and stop seeding after some time, but they can opt out of this. The "bookmarking" in the metaphor is what people call "pinning", which means that a node chooses to seed a file indefinitely.
So it's permanent as long as someone has the file pinned, which will not always happen.
As long as other people access it or pin it.
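In practice, "pinning" is just (hash is a placeholder):
$ ipfs pin add $hash
$ ipfs pin ls --type=recursive
$ ipfs repo gc
pin add marks the content to be seeded indefinitely, pin ls shows what you've pinned, and pinned content survives a repo gc.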
There's a project to offer compensation (like buttcoins) to people that voluntarily share their HDD space and bandwidth to others to guarantee their files will always be accessible. But this is still far away in the future since ipfs itself is still in alpha and will stay in development for quite a while.
>Which darknet protocols are built around merkle hash graphs (like torrents)?
Like I said, no compelling reasons. I understand you are excited but nobody will know what the fuck that even means.
>no mutable links
>not inherently compatible with other technology
>requires centralized trackers
The only meme here is you desu senpai.
That's relative to how many people are using it. If it becomes as widespread as the web I can guarantee you that broken links will become extremely rare compared to what's going on now. Nothing is permanent. Daily reminder that you will die one day.
That's not its only advantage anyway. Things like "offline" connectivity, proximity fetching and a lot of other stuff are also very important.
So can I use this as a replacement to torrents and not get raped for copyright.
Also can I leave my multiple TB hdds serving on ipfs and expect to be safe? Or is it obvious what I'm doing, and can peers be malicious?
>So can I use this as a replacement to torrents
>and not get raped for copyright.[sic]
If you use it over i2p, yes. It is not anonymizing even though it's encrypted. It is also safer than torrent because you can't tell if you're downloading from someone who's hosting the content explicitly or if the content is merely cached, so more evidence must be obtained.
but isn't my actual computer external facing when I'm serving content? My IP is used to connect to me and get my files. Isn't it then trivial for a copyright enforcement agency to connect to me, log my IP, and send a notice?
that's the same as ipfs though, just like was said earlier. the files are automatically pruned after a date, and users can delete them manually too. it's no different except your internet cache lasts a lot longer by default.
You are a fucking retard.
I strip all trackers from magnet URIs and they work fine.
So why are we using it?
"we're" all using it to send anime that's pretty obvious. I understand the benefits of the system for keeping files alive, but it seems without an immediate use if it can't be used to share files care free.
maybe it just needs like proxy servers that know what IP belongs to what hash or something. well I don't know, I'm not good at cryptography or networking.
You scared the anime police are going to come after you?
>maybe it just needs like proxy servers
As said above you can just use an anonymity layer. You can still join a swarm all the same.
it's literally no different though. all that's required to emulate this behaviour is to make your downloads and temporary internet folder never autodelete and to run a web facing server serving the content.
literally all ipfs is.
Your browser cache is not accessible by anyone browsing the web. The point is that you immediately act like a server and help keep the content alive. When you "save" something you are discoverable by others who want to browse the site and hence become indistinguishable from the original host.
It doesn't matter if not all files are shared by everyone indefinitely. The web can't do anything remotely similar. The best you'll get is someone hosting a mirror, on another address that you'll have to discover via e.g. Google. Discovering mirrors is not part of HTTP.
bitswap doesn't use HTTP. Nice comment though, you know what you're talking about.
So if I go to
and he has removed it, will someone else who has that file serve it to me on that url (or behind the scenes, with the software searching for the hash)? if so you might have convinced me of the benefits over conventional http and webserver models.
Not sure what video you're talking about, but there are several ways to do it. One is to run ipfs directly and either write the content to a file or cat it. Another is to use a web gateway on localhost which runs the IPFS protocol on the backend, like you said.
I said "yes" as in "you're almost correct". What the other's are raging about is that the address doesn't contain an IP or anything similar in the first place. The peer serving the file is completely secondary and transparent. What matters is the content.
the web server it ships with is just a front end, a 'bridge', to allow ipfs access with a web browser (since current browsers don't yet support ipfs directly)
try it yourself, download ipfs, run the daemon, then open this;
the video will be located and downloaded over ipfs, addressed/identified by the hash you used, and served to your browser over http (ONLY between the ipfs daemon you're running and your browser, http is NOT used over the internet here)
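i.e. something like this (with whatever hash you were given):
$ ipfs daemon
then point your browser at http://localhost:8080/ipfs/$video-hash
the daemon fetches the blocks from the swarm and hands the file to your browser locally.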
Caching, content-addressing, everything is basically a single massive torrent (in gaymen's terms), always encrypted end-to-end (optional with bittorrent), mutable links, can be used naturally with other technologies, etc.
You can have six different torrents with six different swarms even if the file in the torrent is exactly the same. It's extremely inefficient.
Also mutability, signing, dag trees and a lot of other ABSTRACT BULLSHITE THAT YOU WILL NEVER COMPREHEND
Not just you. The webui isn't very polished because the software is still in alpha and has more important issues to deal with for the time being. At least it doesn't segfault like gnunet's :^)
Someone mentioned a workaround: wget the file through http://localhost:8080, wait until it is done, and ipfs add it. Adding will be instant, since the file was downloaded via IPFS behind the scenes.
oh i see thanks for clearing that up.
ok thanks i'll read the wikipedia pages on some of these terms.
for the hashes of files, this works great for identical files yes? but torrent websites for instance may have a thousand torrents for a single movie, all different type/size etc. will there be a way to like determine similar files (not sure if hashes can be used this way)? or will indexing websites solve that issue?
Can you give me a brief overview on how the versioning works? it sounds quite cool.
>will there be a way to like determine similar files
Large files are split into small blocks, each with its own hash. If two files have the same block, they'll both seed the same content even if it doesn't come from the same file. However, for non-text content, it's very rare for this to happen. Obviously if you're downloading a 4K BD-rip you don't want to receive a 480p camrip block instead, so you will always have a different set of blocks for such files. But if you encode the same file with the same settings, assuming the process is deterministic, then the file will be the same anyway.
Short answer is content-addressability (again): Since addresses are uniquely determined by the file contents, making an edit to the file changes the address. The content on the old address is still accessible (barring garbage collection and not enough people pinning it). The block level dedup also helps a bit, since the whole file won't have to be redownloaded and repinned, only the block that changed.
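To see this in action (output hashes are placeholders):
$ ipfs add notes.txt
added $old-hash notes.txt
$ echo "edit" >> notes.txt
$ ipfs add notes.txt
added $new-hash notes.txt
Both hashes stay valid; anyone holding $old-hash still gets the old version for as long as someone has it pinned.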
>for the hashes of files, this works great for indentical files yes?
yes, identical files can be downloaded from anyone who has it, regardless of filename
>will there be a way to like determine similar files
not by the hash, there is no indication as to how "similar" a file is except by comparing how many block hashes are the same between two files (large files are chunked into multiple smaller blocks for various reasons; both the whole file and each block have their own hashes)
>Can you give me a brief overview on how the versioning works? it sounds quite cool.
a hash 'defines' a unique file, so if the file is updated, it naturally gets a new hash, so now you have two hashes, one that links to the old version of the file, and another for the new version
how would you see the version history. and what decides whether it's a modification or a new file?
is the original uploader the only one who can create new versions?
i know they said gitlike, so is the original uploader like the maintainer of his file? and can accept changes or something but ultimately decides versions?
No matter how you look at it, an update and a new file are the same thing. With IPNS names, the original creator provides a permanent name to access a resource. To update where that permanent link points, one uses ipfs name publish <hash to publish> <key to publish to>. For now, <key to publish to> isn't supported and will be your node's key by default, though.
Whether a file is replaced or "updated", the entire hash changes because that's how hashes work.
It's gitlike in that sense, since only the original uploader can decide what's the canonical next version. However, any version can be pinned by anyone. If someday someone decides the new version of the content is shit, they don't need to pin it on the uploader's behalf; they can keep the old one and assign their own dynamic name to the version they prefer, as if it were a git fork.
>how would you see the version history
You can't at the moment, but I remember reading something about plans to add such a feature a while ago.
>is the original uploader the only one who can create new versions?
>is the original uploader like the maintainer of his file
To add to what >>51490429 said: Anyone can release new versions or update it however they want. Any user can create a signature for any file, but the signature is unique to their node. You would then give out this signed address to anyone who would be interested in your newest version of the file. They can then be certain that the file originated with you (or that you vouch for it in some way, at least). These signature files also happen to be mutable, so you could update them to point to another file (e.g. a later version) and your peers would see your new version instantly.
is it possible to grab a list of peers from the DHT then just search through all their files via filename?
if a search engine were to be made is this what it would do? but i mean it'd cache all known results into a local db to be much faster. but you get the idea.