Are there things in computing or programming that just work and we don't know how they work yet?

Are there things in computing or programming that just work and we don't know how they work yet? Like any mysteries, or something that can't be figured out?

I can't imagine there possibly could be, since we made all that stuff from scratch, but I won't know unless I ask, I guess.
>>
>>8530413
Yeah, a lot of stuff is just guesswork. The best example I can think of is real-time applications. You don't really know what's going on with the data at any given point; you just have statistics to go off. There's noise in computing that we can't explain or predict completely. We just know how to work around it, at the expense of efficiency.
>>
>>8530413
https://www.damninteresting.com/on-the-origin-of-circuits/

This might interest you
>>
>>8530413

What you are describing is in fact actually happening now, with some regularity.

It goes like this: computer programmers program a computer, or a series of computers, to do a bunch of different, very complicated and tedious/onerous things: prove a theorem, trade on the market, etc. And a complex, rich behavior emerges.

Then the computers do something unexpected, sometimes a pleasant surprise. They cause the market to start crashing, they prove the theorem, or they accurately model the physical phenomena, and reproducibly so when the program is run again.

/And in each case, the programmers themselves cannot understand in specific detail exactly how or why it is that the computers managed to do just what they did/. Or, /the programmers themselves can't understand how it is that the computer is right/. But right it is.

This goes directly to your completely reasonable suspicion: humans programmed the thing, of /course/ they should be able to predict or explain its behavior! In principle, I would initially expect the same thing: you ought to be able to reverse-engineer the whole thing, given enough time. But I've heard the opposite often enough now to believe it's a real thing, an emergent property of these systems. They are turning into "black boxes", for all practical purposes.

I am not a CS person and I tried to look up some concrete links on this, but I regret that I could not find any. I know that someone else on /sci/ knows what I'm referring to though, and can buttress the claims in this post.
>>
>>8530429
Machine learning has that property in parts, doesn't it?

The learned valuation function works, yet we don't know how it works.
>>
>>8530413
it depends on whether P=NP or not
>>
Artificial neural networks and machine learning are computational approaches that work even though we don't know exactly why the final system works. Look it up: it's basically using evolution to solve difficult problems that seem hard to automate because of their nature. This video explains really well how a neural network learns to beat a Mario level without any human intervention: https://www.youtube.com/watch?v=qv6UVOQ0F44
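
Not the NEAT setup from that video, just a minimal sketch of the evolutionary idea it builds on: mutate a candidate, keep it only if a fitness score improves, repeat. The fitness function and numbers here are made up for illustration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Toy fitness: how close a candidate is to a target value.
       In the video, "fitness" is how far Mario gets through the level. */
    static double fitness(double x) { return -fabs(x - 3.14159); }

    int main(void)
    {
        srand(42);
        double best = 0.0;                       /* initial candidate */
        for (int gen = 0; gen < 1000; gen++) {
            /* mutate: small random perturbation of the current best */
            double child = best + ((double)rand() / RAND_MAX - 0.5);
            if (fitness(child) > fitness(best))  /* keep the child only if it scores better */
                best = child;
        }
        printf("best candidate after 1000 generations: %f\n", best);
        return 0;
    }

The point of the "black box" complaint is that in the real version the candidate is a whole network of weights, and nothing in this loop ever produces an explanation of why the surviving weights work.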
>>
neural nets/AI. One of the biggest problems is that you don't know how it's getting the answers it gives you. Funnily enough, liberals are worried about candidate-selecting AI being racist.

also, I personally still don't know where the fuck 0x5f3759df comes from.
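
For anyone else wondering: that's the magic constant from the fast inverse square root routine made famous by the Quake III source. Roughly, it gives a cheap initial guess for Newton's method by abusing the IEEE 754 bit layout of a float. A sketch in C (using memcpy instead of the original's pointer cast):

    #include <stdint.h>
    #include <string.h>

    /* Fast approximate 1/sqrt(x). The magic constant 0x5f3759df turns a bit-level
       shift of the float into a good starting guess for Newton's method. */
    float q_rsqrt(float x)
    {
        float half = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);          /* reinterpret the float's bits as an integer */
        i = 0x5f3759df - (i >> 1);         /* initial guess: shift plus magic constant */
        memcpy(&x, &i, sizeof x);
        x = x * (1.5f - half * x * x);     /* one Newton-Raphson refinement step */
        return x;
    }

Why that particular constant minimizes the error is exactly the part people had to reverse-engineer after the fact.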
>>
>>8530451

this is pretty cool

at this point, if we reach any sort of advanced artificial intelligence, it's probably going to have to be developed in a manner that leaves us uncertain how the underlying mechanisms function
>>
>>8530429
Wasn't there a mathematical proof generated by a computer that was so long and complicated that it was literally impossible for any human to confirm it? I remember reading about it a few years ago.
>>
>>8530557
boolean pythagorean triples problem?
four color theorem is the earliest controversial one i know.
>>
>>8530557

I /am/ a math guy, and the other anon is correct that the four-color theorem is an early and well-known example of a mathematical proof that was carried out by computer, since it involved checking a tedious number of cases. The computer's version checked out, but this left the humans with a philosophical problem. A gold standard of transmitting mathematical truth is that human beings can communicate the idea to other human beings, which is a slow process. There is always a possibility, however remote, of some goof having been made somewhere - humans do this all the time, of course, and even a computer can have a machine error of some kind (as opposed to intervening human error) veeeeeery rarely, as I understand it. But it's not impossible.

And this slight possibility is one thing that provides philosophical grounds to reject computer proofs - oddly enough, since, as I've just said, humans are much more error-prone. But we flatter ourselves (legitimately, I think) that we have uniquely creative capacities to judge our own work after long reflection. The trick is to set aside the time for that long reflection.

There is a philosophical case to reject the "black box" in favor of only what we can deliberate and understand amongst ourselves, however limited our capacities in this wise may be.

We also now have proof-generating software which, as I understand it, is (of course) eclipsing humans yet again, sometimes in the ways I alluded to above.
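
To give a flavor of what machine-checked proof looks like, here is a toy example in the Lean proof assistant (obviously not the four-color proof; Nat.add_comm is a lemma from Lean's library). The point is that the checker, not a human referee, confirms that the proof term actually proves the stated proposition:

    -- A trivial machine-checked statement: Lean's kernel verifies that the
    -- proof term on the right really proves the proposition on the left.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The philosophical worry in the post above is about trusting that kernel (and the hardware under it) instead of a room full of mathematicians.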
>>
>>8530659
I wonder what will happen after we have created a strong general AI. It would be a black box too complex for humans to understand, just like our own brains. If it can examine societal or political problems and generate a solution that's too complex for humans to understand, will we reject it? And what happens when this AI interprets the rejection of an obviously correct solution as yet another problem to be solved, and creates a solution for that as well? Rogue AI is always depicted as evil and wrong, but what if it actually turns out to be right?
>>
>>8530413
An area of research that comes to mind is variability in HPC (high-performance computing).

It goes like this: the modern systems we build are increasingly complex. The hardware is more complex because people want more features, like more instructions and wider vector units. We put more of this hardware in every node so that we can do more work. So we are building computers with more nodes and more cores per node, and we are putting different types of hardware on each node, such as GPUs and FPGAs. This makes the performance of the hardware very hard to predict.

Besides the nodes themselves, the interconnects are getting more complex, with the current best (the dragonfly topology) actually having random aspects to it. You don't even know how a packet will get from node A to B with 100% certainty.

On top of that, the operating systems that run on these nodes are more complex, making their performance harder to predict as well. And on top of the OS, the software running on these computers is getting more complex as well, as people come up with techniques that sacrifice simplicity for scalability.

And finally, the compiler writers are coming up with more optimizations that make the machine code more complex, and may increase the runtime with some probability but decrease it in the average case.

If you add all of this up, we see that it is getting harder and harder to predict the performance of modern super computers. Even if you run the same job on the same computer, you may get wildly different runtimes, say 50% slower the second time. This is a tough price to pay when your simulation is supposed to take 24hrs+ to run. And it could be due to network interference, OS interference, processor variation or who knows what.

So to connect this to what you're asking, we don't really understand the performance of these giant systems. We have models but they don't always work.
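
You can see a tiny version of this even on a single machine by timing the same fixed workload several times and looking at the spread. A minimal sketch (POSIX clock_gettime; the workload and iteration counts are arbitrary, and the run-to-run variation on a big shared, networked system is far worse than anything this shows):

    #include <stdio.h>
    #include <time.h>

    /* A fixed amount of work; volatile keeps the compiler from optimizing it away. */
    static double work(void)
    {
        volatile double s = 0.0;
        for (long i = 1; i < 50000000L; i++)
            s += 1.0 / (double)i;
        return s;
    }

    int main(void)
    {
        /* Time the identical workload five times; caches, the OS scheduler,
           and frequency scaling already make the runtimes differ. */
        for (int run = 0; run < 5; run++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            work();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("run %d: %.1f ms\n", run, ms);
        }
        return 0;
    }

On a supercomputer, the interconnect, other users' jobs, and the OS noise on thousands of nodes all stack on top of this, which is why the models only get you so far.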
>>
>>8530718
If you're interested, you'll want to search for "HPC variability" or something similar and look for papers by Torsten Hoefler and Kirk Cameron, for starters.

And papers on "lightweight kernels" often fall into this area.
>>
>>8530701
>Rogue AI is always depicted as being evil and wrong, but what if it actually turns out to be right?
>evil and wrong
>right

Ah yes, advanced artificial intelligence will be the first time the human race is ever confronted with horrible immoral decisions that have a basis in logic.
>>
>>8530413
I guess there can be an infinite number of design patterns, but the ones we have are more than enough.

So technically, "programming" is endless.

There's also "quantum computers" and the meme applications they could have.
>>
>>8530429
why the fuck are you puttin /s everywhere, mongoloid?
>>
>>8530718
>>8530719
If they understand that there is variability and why there is variability, does this really count as something that is "poorly understood"?
>>
>>8531389

This denotes italics, and is commonly used throughout the site, since italics as-such cannot be implemented, last I checked.
>>
>>8530413
>Are there things in computing or programming that just work and we don't know how they do yet?

most people don't know how anything works, to be honest.
>>
>>8530452
> liberals are worried about candidate selecting AI being racist.
Garbage in, garbage out.

If you give a neural network a bunch of photos, and 95% of the photos tagged as "human" are of caucasians, don't be surprised when other ethnicities end up being misclassified more often.

The ease with which stereotypes can be used to instil racism in an AI suggests that it's a reasonable model of human cognition.
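
A back-of-the-envelope illustration of "garbage in, garbage out" with imbalanced data (numbers made up): if 95% of the training labels are one class, a model that just predicts that class looks 95% accurate while getting every minority example wrong.

    #include <stdio.h>

    /* Toy illustration of class imbalance: a "classifier" that always predicts
       the majority class scores high accuracy yet misses the minority entirely. */
    int main(void)
    {
        int majority = 9500, minority = 500;           /* 95% / 5% label split */
        int correct = majority;                        /* always predict "majority" */
        double accuracy = (double)correct / (majority + minority);
        double minority_recall = 0.0;                  /* every minority case missed */
        printf("accuracy: %.2f, minority recall: %.2f\n", accuracy, minority_recall);
        return 0;
    }

A network trained on skewed photo sets fails in a less obvious but analogous way: the headline metric looks fine while one group eats all the errors.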
>>
>>8530452
>0x5f3759df
i got that reference without googling :)
>>
>>8532532
Then I suppose you would say that 'the runtime performance of jobs on large clusters is poorly understood'.

It's only well understood in the sense that we know all the parts of the computer. I listed pretty much every component of computers, so it's not like we've narrowed down the problem.

And I didn't even get to power, which is increasingly important, and hard to predict.
>>
>>8530452

Yes you do know. Statistics in, decisions out. If you get fucked results, review the data you feed it.
>>
>>8530413
Various forms of Shell sort and Comb sort
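
Good example: the code is trivial, the analysis isn't. A sketch of Shell sort with the original n/2, n/4, ... gaps; the open part is the running time for general gap sequences, not the implementation.

    #include <stdio.h>

    /* Shell sort with the original Shell gap sequence (n/2, n/4, ..., 1).
       Choosing the gap sequence with the best worst-case behavior is the
       part that remains poorly understood. */
    static void shell_sort(int *a, int n)
    {
        for (int gap = n / 2; gap > 0; gap /= 2)
            for (int i = gap; i < n; i++) {
                int tmp = a[i], j;
                /* gapped insertion sort: shift larger elements right by gap */
                for (j = i; j >= gap && a[j - gap] > tmp; j -= gap)
                    a[j] = a[j - gap];
                a[j] = tmp;
            }
    }

    int main(void)
    {
        int a[] = {5, 2, 9, 1, 7, 3};
        shell_sort(a, 6);
        for (int i = 0; i < 6; i++) printf("%d ", a[i]);
        printf("\n");
        return 0;
    }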