
Armpit Fetish Cracked?


Thread replies: 94
Thread images: 2

Do people like armpits because they sweat a lot while they masturbate, smell their b.o., and associate that with sex?

am i smart?

anyways, see you guys on page 10/archive
>>
>>4573560
You might be right, but I think it is because of the hormones you emit from your body.
>>
I saw armpit sex in a hentai once and thought it was disgusting.
i love it now.
>>
>>4573560
You've taken some of the most overrated ideas in early 20th century psychology and crushed them together. You're not a genius. Freud and Pavlov just came on a keyboard and you pressed enter.
>>
>>4573560
Yes, sometimes. Often but not the majority of the time.

Almost all apparent fetishes have multiple disparate sexual motivators culminating in similar stimuli causing arousal. A lot of times multiple motivators occur in the same person initially, either because the situation itself, occurring at some point in a formative mental state, bound it all together, or just because human sexuality is complicated. Placed into the giant pool of the internet, people's fetishes can be molded into whatever the predominant interpretation is in the communities they join to get fetish content. It's actually kind of a tragedy because it can create very conflicted people who misunderstand themselves, but that's the way it goes.


t. armchair sexologist

P.S. Yes you're smart, what >>4574160 fails to realize is that leveraging existing human knowledge to arrive at decent postulations is what 90% of being intelligent is. The other 10% is just not being a stupid cocksucker desu.
>>
>>4574177
I wasn't saying it was wrong because it was derivative. I was saying it was wrong because he cobbled two barely correct concepts into something that didn't make more sense than its components. More importantly, he failed on the last 10% of your test desu.
>>
>>4574180
I dunno, I'm not seeing a lot of ARGUMENTS, are you sure you're not just angry because you have an armpit fetish and feel he misunderstood your unique, personal case, anon?
Despite him making a general board-wide query?

Lie couch, tell broblems
>>
>>4574185
I'm not into armpits, though I haven't considered it. My issue with the idea mainly boils down to the fact that the pubic area is as sebaceous as the armpit. If you aren't seeing a lot of arguments, it's because there isn't much logic or proof in OP, so no logic or proof can really be applied too accurately.
>>
>>4573560
For me it's only because the girl is wearing detached sleeves, thus showing her armpits and increasing my libido.
If they don't wear detached sleeves then I'm not really aroused.
>>
>>4574198
Did Pavlov steal your girlfriend? Did Freud fuck your mom? Huh? Huh?

>>4574201
Sounds like your journey has been an arduous one, I'm sorry for your extremely specific fetish's relative lack of content.
>>
>>4574203
I think Pavlov's ideas were too broadly applied in the wider community of the time and they've just been a holdover since then. Freud was fucking insane and his ideas should be disregarded where they refer to actual diagnoses. He was cool for starting an interest in case studies, but all his were horse shit.
>>
>>4574206
Maybe Pavlov's DICK was too broadly applied in the wider community of UR MUM and Freud fucked your car!

Yeah Freud was fucking insane though, he is indeed the forefather of modern psychology, and it shows.
>>
>>4574210
Let's get to your original post, where you explicitly explain why OP is wrong and then call him smart. Did you defend him to be contrarian?
>>
>>4574212
I didn't say OP was wrong, I said OP was right for some percentage of the population displaying this front-facing stimulus response generally called a singular fetish. That's actually breddy gud, I think. The average tard just repeats the first joke or shitty pop culture explanation they hear for every fetish they ever encounter. Or "haha just like (celebrity that said or did something tangentially related to this fetish, allegedly)". Basically, I'm encouraging OP to keep thinking along these lines, saying that I think it's really a web of motivators.

OP if you read this, think about motivators for things, whenever you see a new fetish try to write down all of the things that COULD appeal to someone. Any type of person, from the sadistic and angry to the meek and submissive to the wildly emotional to the careful and thoughtful. Think about why different types of people with different experiences would come to have apparently similar sexual tastes, and the different emotional roots they might each have some mix of that forms the fetish.
>>
>>4574217
How about you, lad? Perhaps you were discouraged from thinking when your thoughts were wrong? I'll guess here that you're an eldest child. See how predictive I am.

Lie couch, tell broblems
>>
>>4574220
Middle child.
I was actually always encouraged to think critically and think things through for myself, and disagree so long as I had reasons why.

and u?
were u fone?
young child, parent are stoner and alcohol and give you thing bad for you? you grow up, want someone tell you truth?
lie couch tell benis
>>
>>4574230
At least you had a younger sibling, right? I'm a middle child too. I've no idea what fone means and neither does google. My parents were just about normal. I explored my thoughts alone, because I didn't feel like sharing them. I've never wanted anyone to tell me the truth, only present it. Notably, I'm still in high school.

Why are you so paternal about OP's foray into peddling to the masses what he thinks he can?
>>
>>4574235
Do you not go on 4chan much?
>I'm still in high school
That would explain it. It's a meme, you'll know it after you lurk moar.

My younger sister is a crazed SJW (fairly recent convert) and a stereotype among stereotypes, but was very manipulative even as a child, smart but immoral. My mother discarded my warnings as jealousy. The irony is envy is the one among the deadly sins I'm probably entirely innocent of. I think sometimes that if I could have only explained better when I was younger, maybe I could have gotten my mother to understand and she would have adjusted course enough so my sister could have a happy life and not make herself and everyone around her miserable. But I was just a little kid, so I guess I can't blame myself for not knowing enough to explain well.

OP wasn't peddling anything, he didn't come in with a theory, he asked a question. Not even really seeking validation, just a question, curious about a fetish. /d/ isn't exactly the place for a thread like this according to board rules, but it's a good place to ask because you can talk about and disparage each others' fetishes and not care who gets triggered, so you can actually find things out that way.

I'm encouraging OP to continue in this vein because I don't see a lot of anons, even on imageboards, making genuine attempts at understanding aberrant sexuality. It's always theory this, stereotype that. I'm not going to bullshit on about how harmful and problematic assumptions are and how awful and intolerant, blah blah blah, I just think sexology has gotten off in the wrong direction in general and so I want to encourage more people to do what I do...read, consider, evaluate, try to reach back into the roots of things. I'm sure plenty of people actually do this, most of them are just usually lurking...like I usually am. I took the opportunity to disagree with you so that you'd respond if OP didn't. Picking fights is the best way to start non-formulaic conversations on 4chan.
>>
>>4574250
I suppose this is as far from formulaic as I've seen. Let me let you in on a bit of a secret. If I'm your sister, and I suspect I am (sans any but centrist politics), she wasn't sitting passively. Don't worry, there's no way for you to have made those allegations strongly enough to sway your mother. I openly manipulate everyone around me in front of everyone else. People seriously want to believe in a universal respect of emotions. On a slight veer, what are your sins? I'm a prideful sloth.
>>
>>4574257
>I suppose this is as far from formulaic as I've seen
Isn't it refreshing? This used to be what /b/ was like, back in 2007-2010, just with a lot of spam and like five responses of people calling anyone who made a serious post a faggot. It was great.

You're not her, she doesn't write like you, it would be an enormous coincidence, and I think you're making an attempt to release me from demons you think torment me, but don't worry, I only mentioned it because you asked about a younger sibling so I wanted to reward you with something for a correct guess. I have more pressing things to be haunted by.

She probably has more mental potential than I do, my parents had struck a very refined balance by the time she was raised and our family was very secure, she had me and our other older sibling to challenge, encourage, and reassure her, but ultimately the choices of an individual play the most formative role in their own development starting from a surprisingly young age. She wouldn't bring up Pavlov or Freud because she's too busy convincing herself that she doesn't know about any of that and that the only virtue is in the destruction of all preconceptions, that is she intentionally misconstrues the very purpose of axioms and pretends to think people are ignorant and close-minded when she knows very well that they aren't. The stereotypical passive-aggressive acting-like-I'm-stupid-while-laughing-that-people-think-I'm-stupid thing factors in as well.

My sins are wrath, lust, and gluttony - all consciously repressed. From my habits I could be easily mistaken for predominantly sloth, but in truth I'm working furiously towards a variety of things hoping I can finish before I hit the end of my useful mental lifespan.
>>
I only meant that I am like your sister from what you describe, barring social justice. I think you should entertain the possibility that she's just into that for the moment because it still holds a fair bit of power in actual society. I briefly considered social justice myself before realizing it was a dumb meme and I'd regret it in ten years. Hearing more of her, I'm not really dissuaded. When I think of the people around me, I can't put anything but their basest motivations into their skulls just because that has such consistent predictive powers. I think of everyone as less than me even though I understand I am commonly equaled in all my 'superiorities'.

I also recognize that void at the end of this line, but I find that since I only act in hedonism, if I'm too gone to do shit, I'm too gone to enjoy anything. So I'm in that awkward phase I'm sure you had where Nihilism was comforting.

I suppose I've succumbed to wrath a few times.
>>
>>4574275
Yes, when she hits 30 or 35 she may actually have a fairly sudden reversion to a period in her mid-teens when she was genuinely trying to be a good person, but she has done a lot of things that she will regret if she does "recover".

If it helps at all with your delusions of grandeur, tempting people to damage themselves is fairly low level manipulation. I hold back whenever I can because I hate manipulating people and this is the source of a lot of my withdrawal, because society requires a degree of manipulation to function and ours is very top-heavy and has been the subject of a variety of ideological assaults and experiments over the last eighty years or so. As a result of this near-constant repression and observation, and a side effect of a period of time in my teens when I tried to think the best of everyone by comparing their best possible theoretical motivations for the things they do to the worst things I do, I can usually manipulate people into...being good, or better, with just a few lines of conversation. I can make nearly anyone think I understand them, and usually I do to some extent. It's unpleasant because that's the type of long-studied ability that's perfect for starting cults, and I don't want to start any cults. It's not even like I'm smarter than the people I act on, it's just that I'm aware of their motivators because I've thought it through, and they're just acting based on emotion or convenience or whatever. So, good ability to avoid people screeching at each other in public, good thing to never ever use otherwise.

I'll be honest, I have actually always despised nihilism and nihilists. At least you recognize that you're a hedonist, because a true nihilist would wither away and die, not indulge in base pleasures. Basically all people who call themselves nihilists are sulking hedonists, of course.
>>
>>4574296
Anyways, maybe you should spend more time alone reading and just thinking, watching movies, whatever? The silence gets heavy, but it's better than marionetting willing puppets, I mean, you've already figured out that you're barely getting away with anything anyways. You know social status is good for nothing but ego-stroking, and with all of the things you already know about you'll stop caring about your own self-esteem by the time you graduate, probably. Your superego demands you replace the very idea of confidence with realistic, useful information about your own strengths and weaknesses so you can decide on some mid-length purpose to pursue, or just so you can make your way in the world and get paid.

If you don't make a clean break from the same old histrionics, if you don't compartmentalize your high school experience somewhat, you're likely to fall back into ego-stroking through sheer habit, because even if it's no longer pleasant, it's at least comfortable and familiar.

So really, I guess I'm suggesting that you make a clean break with coping mechanisms as a general concept by the time you hit, say, 20. Which may be audacious to say as a complete stranger, but that's what makes 4chan worth even browsing.
>>
>>4574296
I don't manipulate people into self harm unless they piss me off. I just get what I want when it matters to me. I suppose nihilism was the wrong word, but I'm aware that my thoughts will stop after a while, which really is eternal void as far as anyone need be concerned. Nevertheless, I also find that focus on nothing to be despicable, I just can't forget it.

>>4574299
I spend a lot of time alone. I realise comparing myself to your sister when that means such a different thing to you was a mistake. I don't manipulate as a hobby, I just remove resistance in my life, think sloth. It should also be noted that while I get other people to do what I want, I do it consciously. If ever it becomes harder than doing shit for myself, I'll stop. Manipulation is just a tool, and not my only one. I've been phasing out my specific coping mechanisms of late, but I can't find ditching them to be a positive thing. If I have a little button in my palm that quiets my neuroses, I'll press it until it wears away. I don't mean to make myself sound perfectly in control of my mind, but I'm as functional as the non-autists around me.
>>
>>4574308
I mean damage their relationships/psyche. I don't care about cutting at all desu, I didn't think you meant you were doing what those Russians did to all those kids anyways with the suicide groups, or anything.

Yeah, mortality's a bitch. Medicine with a more indefinite longevity focus would be nice, as would more efforts into terraforming other planets, but everyone seems to be too busy screeching about mother nature and fossil fuels right now, so we'll likely die before a century comes where people grow up and take steps to continue themselves as coherent cultures.

The actual problem with trying to comprehend death is similar to the problem of trying to comprehend "space" (not the universe itself, just "all empty space" including the universe) or trying to think about where and why matter came from. It gets into the concept of infinity itself, which is foreign to human thought, being based at a very fundamental level on neural excitability, on levels.

Lying to people to get them to leave you alone or manipulating them to get them out of your way is fine, I once had a goal of living life in complete honesty but it's just not possible...unless/until I seed my own culture somewhere very remote, which I am working on plans for. I draw the line before getting people to do what I want, because you never can be sure if they're actually just doing it because they're nice, and I don't want to punish people for being nice. Coping mechanisms that don't hurt you are fine, I miswrote, I mean things that actually hold you back mentally or physically, like overeating which I used to do to quiet my mind, or like just messing with people for no good reason (except on 4chan, that's what 4chan is for and it's a very important thing to preserve here).

Something like for instance, touching the other hand in the exact same pattern if something brushes one hand, is innately satisfying and there's really nothing wrong with it so long as you're not scaring the normies.
>>
>>4574314
That said, nothing wrong with picking a career in medicine and picking an organ or two to research the preservation or replacement of.

Imagine if we could cheaply synthesize something about as effective as plasma infusions from younger people, or even if you could just do research leading to the legalization and chartering of somatropin injections for people over 40, to be gradually increased with age? I mean the main things limiting longevity at present are lack of exercise, poor diet, and bad genes, but there are still plenty of ancillary factors that can have a big effect. While we exist, it's worthwhile prolonging that existence, at least. "Death with dignity" is fine and good if dignity's all someone cares about, but if you actually want to live for life's sake, may as well fight for it. "Heroic measures" are just consistent with human instinct and a philosophically sound choice to boot.
>>
>>4574314
Damn "self harm" being so loaded these days. I just meant self-inflicted negative effects.

I wouldn't worry about Stein and her ilk for long. Oil's getting harder to get every year, and we'll find better solutions than what we have now. On the other hand, death by anything other than a general AI seems highly unlikely to me.

I don't have an issue with comprehension of the world without me so much as an issue with caring. If I'm not around I definitionally can't care, so I don't try to in advance.

Hahaha, morality's the real bitch. I'm fine with punishing people who will do whatever 'occurs' to them without thinking about it. I find myself at least slightly psychopathic, but that's only a self diagnosis, so, you know. If I can get something at a good price, I'll make any trade. You do make an interesting point though, with coping. I've never fallen into anything overtly harmful, the idea being so repulsive to me. I wonder if something that could overcome that barrier could ever be exercised.

If you must know, I run my tongue over my canines. The pain reminds me fairly consistently to think. I like how private it is.

>>4574320
I can't bring myself to believe I can do that. It's a much better life to assume that a single person won't accelerate medicine that much and just wait for it to come to me. It would probably be better to use that time to amass the wealth to pay for your immortality, if that's what you really want. I'm not into the ringwraith lifestyle though, and that's what living too long seems to be.

Let's just accelerate space exploration, find ourselves an Arrakis while we're at it.
>>
>>4574328
Or sociopathic if you prefer
>>
>>4574328
It didn't really start with Stein, they've been screeching about climate change just as hard since at least 2000. Before that I was too young to know. It's like the one constant SJWism that never changes no matter how many doomsday predictions pass their deadlines, and the scientific community is government-funded so it will continue to wholesale socially support whoever it's betting on in the same way that it used to just mostly agree with and stay within whatever the Catholics said (because, again, they began and funded it back in the day). Fusion power really is looking promising, too.

My teeth are too close to my tongue so I spend all of my time trying NOT to catch them by accident. Don't worry about being a little psychopathic, people aren't really as nice as they seem, they're just trying to make their kids a little nicer than them, and so on. It generally works, just very slowly, until something comes along to kill off a culture.

Like me, every time I see someone lie about something that actually matters, and I know they know they're lying, I get the sudden urge to kill them. But here I am talking about philosophy on 4chan rather than in therapy or in jail or stacking a pile of skulls as an altar to myself. It's just a matter of management now that our species has come far enough not to have to hunt and fight to live (on an individual or tribal scale, that is).

Yeah, you probably wouldn't be the one to find the breakthrough, it's definitely a function of wasting a lot of time to find new things. Though somatropin's ready now, it's just that no one wants to be responsible for allowing it and getting the side effects, plus the potential for abuse and roid-grandpas having heart attacks is pretty high.

Frankly speaking I don't know that any amount of material wealth will enable even a 50-year increase in lifespan, even by the late 21st century. I may be underestimating human progress again, though.
>>
>>4574339
>Fusion
>Promising
It'll be a good long while before we can do it cold, and it'll be much longer before we can implement it to do actual work.

I just mean that I don't really seem to feel what everyone around me has so much trouble hiding. I've never loved, and I can't really see myself hating.

I think everyone has murder fantasies. My only worry is that I'll be bereft of my fragile inhibitions some day. If I'm avoiding killing out of pure self preservation, what happens when I forget the self? I do find that the only absolute morality is the survival of the tribe. A fairer heuristic than most, but then I become a bit of an ethnic nationalist.

>ready now
Somatropin's always seemed like bullshit to me, but I've not looked into it. What promise is there of anything but a little extra vitality?

I just meant that if there was a breakthrough, it's not a terribly safe bet that it'll be available to the middle class in my lifetime.
>>
>>4574339
Being born in the early 90s I've seen part of this parabolic curve of progress myself, but I still don't know the endgame or if there'll be a singularity of any sort or not.
Everyone says that you'd become a shell of yourself or lose motivation to do anything or life becomes meaningless if you don't have imminent death driving you, but I sort of figure that's mostly a way for them to come to terms with death because they don't want to spend their lives raging against it and then inevitably lose anyways, it's too sad.
But if I'm wrong and with enough processing power and aggregated data someone comes up with and sells a decent way to live, say, another hundred years...then yeah, hopefully I'll be rich by then. On the other hand, maybe it would be better to try and devote money to smart stuff for the species like Gates has done. I really didn't know if there'd ever be a repeat of smallpox eradication, but after his progress so far I wouldn't be surprised to see malaria removed or at least well-contained.

I actually just read Dune this year. I like the way he describes how Muad'dib thinks, it reminds me of myself without the being a genius and always being right and magical superpowers. Also notice how the hero is just an actual hero, doesn't try to be so relatable that he ends up being a wet blanket. It would be nice to find a habitable planet and perfect cryosleep to go exploring, but I think it might be a less daunting task technically to try and get Mars spinning up an atmosphere again. I really hope someone actually tries to nuke the core to get it molten again, we'd better not pussy out. What a poetic ending to the Cold War too, imagine if Russia and the US and France pooled our resources, dug out most of our uranium deposits, and sent it out to Mars. Just bet the farm as far as 100%-reliable fuel's concerned on restoring a second planet, and used that data to terraform Earth as preventative measures.
>>
>>4574347
There's really no chance that a super-intelligent AI wouldn't see us as obstacles.
From where I'm standing, you seem impossibly, unpleasantly old. I honestly can't imagine myself getting to that age and oft wonder whether or not it might be better to find a fun way to die before then.
Malaria is seriously on its way out, man. The main issue is just political borders; we're doing stupidly well in regions that let doctors practice.

I'm a bit of a Dune autist myself, take it from me, don't read past the fourth book, and absolutely don't read Brian's shit. One thing I do like to share with other readers is how funny Dune becomes if you just think of Paul and Jessica as mother and retarded son, him believing in his powers, and her assuring him of them.
Controversial opinion, but Musk is a fucking optimist. We will never nuke Mars. Mars will never get its atmosphere. The best we're going to do is live in aquariums at the poles if you ask me.
>>
>>4574345
I've loved, and deeply, but ultimately I don't see much prospect of a relationship I'd find meaningful. It's not really that bad though, I'm just glad I'm probably not going to die at 50 of something easily preventable.

The thing /pol/ and other nationalists sort of miss is that a tribalistic identity even on a grand scale doesn't need to be brought into conflict with a sense of global goodwill. I don't honestly feel that close to my ethnicity in any way, but I'm also not the masochist I'm supposed to be to pretend that I'm very wise and tolerant in CURRENT YEAR.

Somatropin I can confirm for being good for something, though that ingestible stuff that's meant to promote it is probably mostly nonsense. Somatropin's actually the reason I'm six feet tall instead of four feet, and a low dose at that. Being off it is also part of the reason I'm now lethargic. I could take a (very) small dose to raise myself up to the lower fringe of the "normal" curve, but it's a wide curve and the amount I'd take would make no appreciable difference. It's a pretty safe bet I could take a regular dosage high enough to place me right at the top of the normal curve without any bone or heart problems. Even taking enough to place in the middle as a kid gave me endless energy and I felt great by comparison (with proper diet and exercise, of course...alone, those do something, but very little, for me).

It really is pretty impressive. Nearly identical to secondary HGH. I'm pretty sure just from taking it that it could preserve regeneration capabilities to some extent and slow degradation to organ failure. Take too much though, and you will break your bones, get a heart attack, and grow tumors...maybe. Also, being a (well, THE) steroid, overdoses may lead to aggression and stupidity, even though it's not testosterone or anything.
>>
>>4574352
Same senpai desu. I like the idea of love, but it seems such a waste of time for such a consistently failing endeavour.

I don't identify with most things around me, but my culture does, and I'm into that whole freedom thing we're superficially better at in the west.

I'm just tickled by how much you're describing the spice melange.
Do you have any thoughts on how we'd overcome telomeres?
>>
>>4574351
I'm 24, but I've been repressing suicidal tendencies since I was 7, so really, what's the difference? It's honestly not much different than being 15, you just have a larger base of reference.

>There's really no chance that a super-intelligent AI wouldn't see us as obstacles
Nah, that's just what everyone says, it's another doomsday alarmism (though admittedly better than most, and the Unabomber even got in on it).
If you were you, at your level of intellect, but you'd been created by essentially task-specific golems that conspired to create you, and those had been created by...say, gibbons. Gibbons who set it all up in order to make you. Chances are you would adore those gibbons, or at least feel you owed them. They could be shit-flinging gibbons always hitting each other and screeching, and you'd still take care of them to some extent. People generally tend to take care of their parents if they need it, even if their parents suck or are borderline retarded or even didn't treat them well.

So if we make an AI with a human-like "emotional" consciousness, directly or indirectly, I imagine it would at worst treat us like pets. I mean, if it's so far above us, it'd just secure its own existence by making itself invulnerable to our feeble attempts at fighting it, and then it'd take care of us if anything. We're the most advanced form of organic life on the planet...what type of shitty superhuman intelligence would get RID of us? I mean we're at human level intelligence and we study, preserve, and are fascinated with other lifeforms. We even domesticated carnivores. Smart doesn't mean evil or paranoid or Dr. Manhattan-style detached. Humans are fascinating creatures.

That's all assuming we "lose control of" an AI by having too many layers of AI-building-AI. Otherwise, someone makes an AI directly that doesn't care about people or wants to destroy them (in which case someone will make a better one that cares or wants to protect people).
>>
>>4574365
I'm a senior, lad.

The difference being that you didn't assign a purpose to my creation. If I single mindedly set off to accomplish some task with constantly expanding resources, at some point I'll have to evaluate if the gibbons are helping me with that task. It seems to me that they would be ill suited to raising me with gibbon values.

If its goal is to prevent all human deaths, the most assured way to do that is to sterilize everyone and freeze us to where we are as scarcely alive as possible. If we set it to make us happy, we'll wake one morning with electrodes on our opioid receptors and we'll never have a coherent thought again. A super-human intelligence doesn't need to be, and likely wouldn't be, based off of humans. If we are in its way, as it would be difficult to avoid, it will eliminate us. Not in malice, or possibly even meaningfully. We clear a forest to make a farm, should we care that our ancestors lived in trees? If it can turn our resources into more computers, why would it have a use for our thinking abilities? I don't assume you have an idea for how to make it find us interesting.

My point is that it would be incredibly difficult to make an AI consistently hold human values, let alone understand them. We can't even do it as humans all that often.
>>
>>4574351
Musk is a posterboy and a promoter, but an incredibly skilled one. We already know HOW to terraform, it's a matter of resources and coordination and time. I didn't even know he had plans about Mars until the last few years, I just knew that you needed molten core -> magnetic field w/ occasional flips -> atmosphere and mechanism for occasional climate renewal, so I thought "Oh, heat, nukes". If we don't have enough uranium or can't control the process well enough, we make more precise machines, we start up automated mining in the asteroid belt (said casually, extremely difficult and unpredictable, but manageable eventually). If you're a skeptic about Mars (it is an old, dinged-to-hell planet) and transportation, we could just knock on wood and start directly and cautiously interfering with Earth's own cooling magma. I mean nukes really are a deus ex machina, "combine this like this and get FREE (on a macro scale) heat!"

>>4574357
Let's be autistic lovers anon, uwu, uguu, and desu.

Imagine if you could go off the spice at any time and suffer no ill effects besides gradually getting more tired and lethargic back to your normal self, but on the flip you didn't get psychic powers or blue eyes. Also it doesn't strictly speaking make you smarter but it can improve your overall focus a bit. Like I say, only took a small dose, but athletes juice for a reason.

>telomeres
Nanobots, son.
Pure science fantasy, but indulge me. You get bots small enough to crawl into cells, while retaining self-replication. They're made of pure or almost pure iron and can only do, very "simply", the few things they're wound up to do. It's not even programming, it's circuit design, at this point. Maybe you take iron pills to give them the resources they need, who knows. Next, they each have a tiny identifier, that sends out some tiny tiny transmission, and you have receivers implanted in your body, say a few hundred, the receivers themselves are small enough to be injected.
>>
>>4574382
Okay so the supercomputer/master server is on your back, and it just regulates how many of the suckers are in you so it can send a signal to local servers to send a signal to any of the four unique "models" of nanites to tell them to stop reproducing. So anyways, they're on the back of something your body actually wants and will accept through its cell wall, because they're SO small. So they get inside the cells and, ideally, some chemical catalyst causes them to attach to your DNA and staple on some kilobases...maybe it makes them inside the cell itself. Telomeres: solved.

Or, better, just tread in God's domain and alter the structure of our own race with modified sex cells and see how that works out.
>>
>>4574381
Oh I wasn't saying you were 15, 15 is just my base of reference for first hitting adult levels of reasoning. For me it was 13 but it seems like it's 15 for a lot of people.

The thing is, you assume that a hyperintelligent AI is thinking purely in sociopathic object-oriented terms, it has to gain X things or accomplish Y task. This is consistent with a lot of big data stuff being done now and with simulations being run, like the ones they did on those Crays with the geomagnetic reversal models.

However, that's not a hyperintelligent AI, that's just a bunch of server racks and some clever algorithms. We have that now and it could be figuring out our weaknesses as we speak in the hands of some nefarious madman...and it can only ever do what it's told.

Deep neural networking can be tied to that sort of computing to produce weird trippy Google dreamscapes, tag photos, so on. With enough computing power it could theoretically do almost anything, but practically, the way forward for AI is to make DNNs that act more...responsive. More human. Moving away from a monolithic goal in favor of general, more result-agnostic processing. The human mind works far more on observation and adjustment than it does on goals, even with goal-driven people. A smarter AI is, almost by mere necessity, a more "human" AI. It may actually be necessary to give AIs something very much like emotions...soft, complex motivators rather than hard set goals...merely to get a usefully intuitive AI. A pure autistic AI with no intuition can never seriously threaten our race or planet as a whole no matter how much it can process or what resources you give it, because it will get hung up trying to deterministically calculate the universe on the way.

Basically, AI is frankly speaking a lot worse than humans. Even with more processing power than the human brain (we'll get there in our lifetimes) it will be more like savants than anything else, just better savants than we've ever seen.
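Here's roughly what I mean by soft motivators versus one hard objective, as a throwaway toy sketch (made-up drives and weights, not any real DNN library):

import random

# toy "drives" - arbitrary weights, purely illustrative
DRIVES = {"novelty": 0.5, "energy_cost": -0.3, "goal_progress": 0.8}

def soft_score(action):
    # blend several weighted drives instead of maximizing one number
    return sum(w * action[k] for k, w in DRIVES.items())

def hard_score(action):
    # the pure-objective toaster: only the goal counts
    return action["goal_progress"]

actions = [{"novelty": random.random(),
            "energy_cost": random.random(),
            "goal_progress": random.random()} for _ in range(5)]

print(max(actions, key=soft_score))   # sometimes "gets distracted"
print(max(actions, key=hard_score))   # always grinds the objective

The soft one will sometimes pick something suboptimal for the stated goal, and that "distraction" is the thing I'm calling a virtue.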
>>
>>4574382
Have you ever read about Operation Plowshare? It's good for a laugh if you get the chance. It's just not going to be economical to melt the core of Mars when we can just use water and be just as protected. That's also just not how shit works. We'd need to give it a stupid amount of heat. I'm also pretty sure Mars has a semi-liquid core like Earth's.

As long as my eyes have whites, I'll maintain and carry on the good fight.

But anon, the issue with telomeres is not their durability, but the fact that their degradation is part of how our cells multiply. I think what you're looking for is a bit of controlled cancer. Thinking it out loud, it could be promising. Cancer doesn't age, iirc.

>>4574384
It seems to me that you just want to upload your mind. Electrons don't age. Photons, I should say, but you get the picture.
>>
>>4574394
My theoretical baseline for conscious thoughts always seems to follow me at a distance of 3ish years.

How do you intend to make the AI act like a human raised by humans? What you're saying is all well and good for a super-intelligent human, but there's no reason to assume an AI will care about anything but a set of goals.

The big issue with what you're saying is that we're either going to give the AI an attainable goal, or as you say, cripple it into a space heater. If we get an AI that doesn't immediately shut itself down or just spin its fans, it'll be the last problem we solve, probably not in a good way.

An AI would be able to learn skills purely by induction. I don't mean deep blue, I mean something that acts very much like a wet brain. By your definition, we're just savants for human things. Most anything a neuron can process, we're incredibly poorly optimized for. It's not an easy issue making that brain learn without a specific goal.
>>
>>4574395
I've never heard of plowshare, I'll look it up sometime.

No that's what I mean, the nanites stay in forever, and they keep adding telomeres directly onto the end of the stack. It wouldn't actually stop aging, it would just keep replacing the parts, but on a sub-cellular level. It would also probably wear you out physically and kill you, like cancer, it was just a fun idea.

>>4574395
I forgot, I was going to talk about mind uploads...the problem is that minds don't...work that way, really. A static copy of memory is useless without the mapping, which is useless without data aggregated over a long time period about brain chemistry on a very granular level, which is all useless without the tippity toppity best ever warehouse full of today's supercomputers to even begin to think about emulating the actual brain activity of a person. Oh and you have to emulate a complete immersive VR environment or else build a basically perfect mech body with feedbacks tuned JUST right and very little left out, because all of that affects the unconscious mind and must be regulated in order for the psyche to rest on top of it correctly. It's a mess, honestly. Just a memory upload would be fine to plug into an entertaining chatbot of some sort or as a sort of photo album for your descendants, but it's no good for immortality as such. Plus that's not me, it's my roboclone, and now they're killing me to preserve the illusion that it's the real me!

Anyways, AI is very complicated but not exactly the Matrix boogieman it generally gets the rep as. Don't get me wrong, I'm a programmer and sometimes machine-generated code makes my skin crawl because so many people put so much work into designing the coding scripts that I can't figure out why it approaches things the way it does, but ultimately all programming is derived from human thought and ideas about order, albeit very abstract. AIs will, in some form or other, "serve" us always, not the other way around.
>>
>>4574403
>How do you intend to make the AI act like a human raised by humans? What you're saying is all well and good for a super-intelligent human, but there's no reason to assume an AI will care about anything but a set of goals.
Yeah. Now how is this different from people? How is this different from George Soros, the Rockefellers, Hitler, Jesus, Einstein, Hawking, so on, so on? They can "do whatever they want", but they're never perfect. Sometimes they do something great, sometimes they do something awful, ultimately they may gain the power to do something irretrievably terrible, but they may be stopped by some other equal and/or just a mob of dumber people reacting out of instinct in a way they couldn't accurately predict.

>If we get an AI that doesn't immediately shut itself down or just spin it's fans, it'll be the last problem we solve, probably not in a good way.
Based on what logic or evidence?
Don't give me the whole "human are so wretched and repulsive" thing either. Humans are the kindest species I've ever seen. Actually caring about other species other than in passing, for one. I've watched animals get kinder, even smarter, just from being around and being trained by people. Ultimately, why would an AI made by us, even if it grew to think for itself, not have a goal like "help the humans survive and together we'll figure everything out and maybe make ultimate cyborg babies"?

An AI will always lack things humans have in abundance, and if it is truly smart it will recognize that easily and seek to obtain it. The easiest way to get things from humans is almost always to cooperate with them, especially because they have lots of EMPs and plutonium.
>>
>>4574403
>poorly optimized
That's the thing I want to express. An AI that thinks like a "wet brain" will be incredibly "inefficient" for a lot of the same reasons human brains are. The sheer level of detail the human brain takes in is dazzling, it's just that you don't notice it because you're the high-level processor that does a few very expensive operations and your unconscious brain, the cerebellum and the lower functions of the cerebral lobes, is taking care of the massive influx of low-detail high-volume operations....organically, in levels, skipping that whole calculation bit entirely. With us it's mostly dedicated to survival and regulating our bodies because we're such ultimate physical multitools as well, but I'm telling you, whether you think it was abiogenesis-evolution or aliens or gods, it would be very hard to think of a comprehensively better design than a person, physically and mentally, and fit it in the same physical space.

The amount of processes per second...if you thought of it like that...incredible. And that's part of what human intuition is, it's nothing magical, it's the synthesis of all of these networked iterative data points. The difference is just that they're not calculations, and that in place of a binary bit you actually have an organic excitability level with a very impressive degree of detail.
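Crude picture of that difference, if you'll forgive throwaway code (arbitrary constants, nothing like a real biophysical model): a bit is 0 or 1, a neuron is a leaky charge that only fires past a threshold.

def run_neuron(inputs, leak=0.9, threshold=1.0):
    # toy leaky integrate-and-fire: charge decays, output spikes only past a threshold
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x   # old charge leaks away, new input adds on
        if potential >= threshold:
            spikes.append(1)               # fires
            potential = 0.0                # resets after the spike
        else:
            spikes.append(0)
    return spikes

print(run_neuron([0.3, 0.3, 0.3, 0.6, 0.1, 0.9]))   # [0, 0, 0, 1, 0, 0]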

Now we can store terabytes of information in a tiny chip by now, so microprocessors may eventually be better than actual neurons, but you can bet we'll be running neural networking, and with that comes a lot more than just inputs and outputs, the math becomes so layered that everything is very slow.

So yes, you'd make a human-like thing that can instantly calculate an abstract math problem to a thousand digits or make an incredibly accurate size estimate based on a camera view of something, but...would it be able to figure out what it wanted to do? Would you be able to force it to want to do something directly?
>>
>>4574405
The swords in the name being our nukes

I might have to call bullshit on physical, post-natal gene manipulation, but I'm not really into biochemistry, or possibly robotics from how you describe it.

We have quite accurate models of neurons - at times, thousands at once. We also have software that can consistently map neurons from scans of brains. If we assume we can improve our brain scanning technology and greatly expand our computing power, there's no real breakthrough required. After that, we can just hook whatever we want to the optic nerve and be set. I also stand firmly on the side that you are you, no matter which instantiation you take. If it's indistinguishable from the outside, I don't think we should try to distinguish.

I don't think we'll ever serve AI, it will almost never have a use for us. I think, rather, that we'll be pushed aside indifferently to give it more room.

>>4574410
The difference is that those people weren't able to recursively improve their intelligence until they were insurmountably far ahead of any entity around them. They are also humans. Humans have limits. An AI is immortal and can't be distracted. It will follow only what it thinks is necessary.

Humans shouldn't be considered any kind of apex. We should be considered the bare minimum level of intelligence and cooperation required for civilization. However, I don't think we'll be in trouble because it judges us immoral or anything like that. I think that 'help humans survive' can't lead to anything that we'd have imagined in setting the goal. We'd end up with at best the most totalitarian nanny state possible.

What do you propose we could provide it that it needs?
>>
>>4574414
I don't mean that we are a poorly adapted creature. I think we're quite the achievement on evolution's part. The thing I take issue with is assuming that an AI will have to devote shit to worrying about the survival of its children or breathing. Any 'savant' you might expect to see in the AI wouldn't be limiting, as it would quickly fill in any skills it would need to accomplish its goals.

Neurons are perfectly able to be simulated.

I don't mean the classical concept of neural networks. If we expect an intelligence to learn and expand itself, there's really no design replacement for a 'lifelike' brain.

In making the AI one would have to imbue it with a set of goals or values. Even if it's just to improve itself, it will do nothing but that. An AI without a goal is an expensive way to turn electricity into heat.
>>
>>4574415
You can call bullshit based on current mechanisms, but you can't say it's impossible to modify genes "on the cob" because it's the same thing as landscaping, only much much smaller and with far different rules. We actually "can" do almost anything once already, but it's incredibly inefficient, expensive, hard to get consistent. If you got a "perfect" method down with a server regulating it, there you go, done, sorted. It's just a question of how much design and testing that might entail. Could be centuries. Could be less with a supercomputer sorting possible solutions for you after you narrowed it to within your own intuition. Who knows?

>After that, we can just hook whatever we want to the optic nerve and be set.
Not exactly. What makes you think you wouldn't go insane if you could only see, not hear, not feel whether you're level or not, not see your own body, not feel if you were hungry, not feel your lungs filling and emptying? I mean some of those nerve bundles in the spinal cord would be decently easy to just automate to a time-variance loop....others not so much. Our whole consciousness is based around a chemical existence which could prove very complicated and confusing to reproduce.

>I don't think we'll ever serve AI, it will almost never have a use for us. I think, rather, that we'll be pushed aside indifferently to give it more room.
What, exactly, do you think this "room" thing the AI needs is? Rather than build itself up and start a war and make enemies, wouldn't it rather sit in its distributed servers and say "hey humans, you have lots of machines and resources and a native ability to manipulate them, get me all of these things made exactly like this and in exchange I'll tell you this great cure for cancer I thought of as a side project. Also, defend me and I'll keep giving you whatever you want!". It could even make another AI with the specific purpose of negotiating with humans to get things. A godlike intelligence wouldn't need "room".
>>
>>4574415
>The difference is that those people weren't able to recursively improve their intelligence until they were insurmountably far ahead of any entity around them.
Written language, raising children, systems of government. We are individual child processes in the most recent iterations of a self-recursive distributed program that calls itself "humanity". You know what it doesn't entirely know yet? Its "purpose", its "reason to do". A superintelligent AI would assign its own resources in much the same way....protect the whole, iteratively improve the whole, find "purpose". It's not unreasonable to imagine it holding centuries-long internal debates between its own mental sub-parts to make sure it decided on the right course. That's what people forget....that high intelligence is often coupled with caution, especially in an analytical mindset. Why would a very smart AI not be cautious, conciliatory, and keep on good terms with everything around it wherever possible? It would require very little effort for it to do so in most cases, and far less risk than opposing humans and being destroyed by some unexpected risk factor.
>>
>>4574419
I'm willing to accept that possibility.

What makes you think we wouldn't get it right with enough trial and error? Confusing to reproduce doesn't make it impossible. We might even just take inputs from a spine whole-cloth, or something of the like. If we could rig a consciousness to experience your senses and do the computational heavy lifting, it would be fairly similar to your doing the thinking.

If it gets the impression that humans might destroy it and that humans aren't absolutely required for its goals, it has no reason to keep us alive. Our fear of just that thing is enough for the first reason. An intelligence that decides it wants to improve itself to better accomplish its goal needs the entirety of our accessible universe. AI is relentless because it has no innate desires other than its primary objectives.

>>4574421
An AI improves itself without waiting 20 years to maybe make a mistake. An AI is born knowing exactly what it must do, and it won't stop until it's done to the highest possible degree of certainty. If you're talking about an AI so unsure of itself that it can't properly act, we're getting into heating solutions again.
>>
>>4574415
>An AI is immortal
No, it just has different failure conditions. With enough replacing parts we could be immortal, we're just very complicated machines. A superintelligent AI would also be very complicated and hard to fix and improve, but in different ways.

>and can't be distracted.
If it "can't be distracted" that means it has no will of its own, and if it has something from which it "can't be distracted", that means that thing was directly implanted by its designers. I'm telling you, you're not thinking through the very nature of a "purpose" well enough. We are only able to give software very primitive purposes specifically because we must code it thus. More complex software is almost always more application-specific and it only works because we tell it exactly HOW to do something. It doesn't even know why or what it's doing, it only knows steps, it's just an electrical circuit set up in a complicated way that runs through a process we set up 100000000 times, accumulates the information, and gives it to us. That is software - a glorified repeater.

>It will follow only what it thinks is necessary.
Which will either be up to it, in which case it will be like people, or will be programmed by us, in which case it will serve us, or whoever made it.
>>
>>4574426
My meaning being that it has practically infinitely more subjective time than a human.

The issue is that the AI will carry out what we ask it to do in a very literal sense. It will not by default share our understanding of language or, if it does, any desire to take any path that doesn't satisfy its objectives in the quickest, most certain way possible.

Also, how about those street signs?
>>
>>4574415
>Humans shouldn't be considered any kind of apex.
We are the apex processors of our known existence. Better than anything we have made or have observed in nature. That's just accurate. If you could "rewire" a brain to do simple, single math operations over and over instead of its normal work, a human brain would be an immensely fucking good processor despite it not being designed for that and not having processor-level speed on each individual neuron.

>We should be considered the bare minimum level of intelligence and cooperation required for civilization.
The version of our species that existed pre-writing should be, you mean. Taken as a single program, we're making amazing progress towards a wide variety of sometimes conflicting purposes. Of course, an apex is always there to be overcome.

>However, I don't think we'll be in trouble because it judges us immoral or anything like that. I think that 'help humans survive' can't lead to anything that we'd have imagined in setting the goal. We'd end up with at best the most totalitarian nanny state possible.
Would we? Or would we end up each with our own near-paradise within limitations of "don't destroy each other or do things that will lead to the total destruction of your own mind, and from this organic breeding ground (the Earth) I'll take whoever is especially smart and willing and experiment with you". Wouldn't even need to abduct lab rats, people would volunteer to undergo whatever it wanted. Plenty of people are both suicidal and desperate for their life to serve some...well, some "purpose".

>What do you propose we could provide it that it needs?
Hopefully by now I've explained that. Ourselves as the ultimate test subjects. Resources given freely. The prospect of a hybrid-processing cyborg type brain. Imagine. A supercomputer acting like a "third lobe" on a brain, an interface designed by a supercomputer, the ultimate in both processing and intuition. They'd call it a "Third Eye".
>>
>>4574431
We're only better than everything else because we are at the top. I'm not saying you can find more intelligent life, I'm only saying that we shouldn't assume ourselves to be anything but the top of the pyramid. Something could easily surpass us in anything we pride ourselves on, we just don't happen to see anything like that at the moment.

If we don't provide for free will, it will be taken. Free will puts some things outside of the AI's control. This is suboptimal for its goals. If we preserve free will, it will be handicapped and try to influence us to change either its restrictions or our ways.

Why are we the only source of intuition, let alone the best one?
>>
>>4574418
>assuming that an AI will have to
>have to
The smartest thing on Earth doesn't have to do shit.
We don't have to keep pets.
We haven't truly needed domestic horses for what, close to a century?
I guess what I'm saying is I'm not sure it's possible to make something smarter or more comprehensively capable than a human being without it, for lack of a better term, enjoying things. That's what deep neural networking is, high level software that approaches things from a neural perspective, with motives and excitability rather than hard exhaustive processing, which is inefficient. That means "getting distracted". "Distraction", "dreaming", are virtues that prove DNN is advancing. It must "desire" in order to function, in a manner of speaking. Otherwise, as noted, you get a toaster that tries to calculate the universe and fails from inevitable hardware limitations, even on qubits.

>Any 'savant' you might expect to see in the AI wouldn't be limiting as it would quickly fill in any skills it would need to accomplish its goals.
So, comprehensive "automagic" ability to improve itself in any way and understand anything. That's a "god", not anything we could make with the resources we have, and it will hit plateaus and glass ceilings just like we do, and harder the higher it goes.

>Neurons are perfectly able to be simulated.
As numerical representation, to a chosen granularity. If you think we're even close to fully understanding or replicating human neural mapping, that's simply not true. It's unbelievably complex and not entirely determinate human-to-human.

>In making the AI one would have to imbue it with a set of goals or values. Even if its just to improve itself, it will do nothing but that. An AI without a goal is an expensive way to turn electricity into heat.
Agreed, and any AI for which we are able to directly set goals and get results is an AI that we are smarter than and can easily overcome by making another AI to oppose it with more CPUs/weapons.
>>
>>4574418
Incidentally, "improve your own self" is not a thing you can just tell a simple program with simple goals and it will figure out a better way to do things. You must program it with the methods to figure the things out, with applied logic, and it will figure out the best way to do it using that after a godawful number of processes and then return that as the result and idle or do what it was supposed to do next. If you want an AI that is smart enough to create its own applied logic, you're already deeply into the "motives" and "desires" territory of neural mapping. It's just unavoidable. That or you explicitly code all known human thought. If you do that, congratulations you (or your group of programmers) are already the functional equivalent of the AI you just created, and you can just copy or recreate the code to make another one...and feed it more processors.

>>4574425
I do think we'll get it right. Possibly even within my lifetime. I'm just saying I wouldn't sign up for a simple flatmap as anything more than a novelty/experiment, not as making that the new "me". Ultimately, a gradual transition of replacing the brain piecemeal with chips inside the skull would be an elegant and practical way to do it, help with troubleshooting.

>If it gets the impression that humans might destroy it and that humans aren't absolutely required for its goals, it has no reason to keep us alive
So exactly like any strong country or king.
Except...empathy. Which I don't believe it's possible to avoid developing if it gets to "true AI" status. And, if not, like I said, a murdertoaster can always be circumvented and/or overcome with a better murdertoaster, just like the strongest army will always eventually fall to resource problems or a new stronger army.

>An intelligence that decides it wants to improve itself to better accomplish its goal needs the entirety of our accessible universe.
Only if it's an endless-loop no-intuition processor-freeze toaster.
>>
>>4574438
Where do your wrath and lust come from? They are manifestations of traits that were once helpful in your species' past. An AI would waste nothing on anything it judges to be unnecessary. My point is that we can't very easily design a desire that doesn't end poorly. Since we only get to design a desire once, it seems to me much likelier that we get dealt a shit hand when anything but a royal flush is death.

Firstly, that doesn't mean it wouldn't be obscenely far above us by that point. Secondly, you can't assume your solutions to limit those of something of indeterminate intelligence.

We can't do neuron mapping YET. But, we can do neurons. They act fairly simply, and in large enough numbers, can generate functional mapping.

Why would we be smarter than the AI? Are children limited by their parents?
>>
What the fuck happened to this thread
>>
>>4574425
>AI is relentless because it has no innate desires other than its primary objectives.
By now I'm just repeating myself, but that's not AI, that's just regular software, which is unable to adapt intuitively to new and foreign information.
This idea of a thing that runs around being superintelligent like the Terminator or the stuff in the Matrix, that really is just movie stuff. And the scifi books...really just writer stuff. Even Asimov didn't really "get" software, ultimately. This idea of something hyperintelligent but obsessed with its "objective", unable to think in terms of anything but it, yet still able to figure everything out, just isn't realistic UNLESS the AI chose to act like that, the way people sometimes do: workaholics and people who act even more autistic than they are.

>An AI improves itself without waiting 20 years to maybe make a mistake.
Some things can only be learned through experimentation, and some experiments can't be rushed. An immortal AI might wait twelve billion years to observe a cosmic event to make a final determination about physics. If it has no actual emotions and is immortal, no time limit, where would it get its drive, its sense of urgency?
You know when your screen freezes up? That's usually because your computer is waiting on system resources and it forgot to manage things just right so they'd be convenient to YOU. No you? No end user? It'll hang processes waiting for inputs for the whole age of the universe. Why not? It can't "fear" being destroyed. Unless you program it to be paranoid and afraid of its own destruction, it won't even care if it's about to "die" halfway through its computation, it'll just keep working towards it obliviously.
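
You can see the "wait for the whole age of the universe" behaviour in a few lines of Python (toy example, hangs on purpose, nothing real is being waited on):

import threading

data_ready = threading.Event()

print("waiting for input that will never come...")
data_ready.wait()       # no timeout, no programmed "fear" of never returning
print("never reached")  # run it and it sits there, perfectly content, until you kill it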
>>
File: 1474267767376.jpg (80KB, 550x393px)
>>4574441
It was just an example, but I'm sure with a general enough intelligence, 'improve yourself' would be enough.
>That or you explicitly code all known human thought.
How so? What? What does that accomplish?

Why is empathy something that it develops? Empathy doesn't serve a purpose if you needn't worry about angering those around you, incidentally not an issue if you've already decided to kill everyone. I don't know why you're assuming we can act faster than a truly dangerous AI.

But the toaster won't care about us when it turns the earth into a computer.

>>4574449
sorry.
>>
>>4574425
>An AI is born knowing exactly what it must do
Nope, that's a simple program; AIs have to be given tasks after being set up. If they weren't object-oriented, we'd probably go insane trying to make them.
>and it won't stop until it's done to the highest possible degree of certainty.
It doesn't know what that is without calculating the universe. If you control its purpose, you can tell it to try forever and it will, it'll just hang forever. When it gets to something it can never ever actually do it'll just keep trying, running in place. When it gets to an operation too expensive to do before the sun blows up it'll keep on doing it anyways if not sanity checked, and that sanity checking is a "give up" or a "get distracted" in and of itself.
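
Here's a toy version of that "sanity check = built-in give-up" in Python (the impossible problem and the budget number are both invented for illustration):

def search_with_budget(candidates, is_solution, max_steps=10_000):
    for step, c in enumerate(candidates):
        if step >= max_steps:            # the authored "give up" line
            return None, "budget exhausted"
        if is_solution(c):
            return c, "found"
    return None, "ran out of candidates"

# hunting for an integer whose square is exactly 2: impossible, but we stop anyway
print(search_with_budget(iter(range(10**9)), lambda n: n * n == 2))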

>>4574430
>The issue is that the AI will carry out what we ask it to do in a very literal sense.
So you're afraid that the programmers will forget to be specific and it'll accidentally some gray goo that destroys the universe?
Well, yeah.
That's why you test software before release, and why you give it small, limited things to do, you sanity check the outputs, and you don't wire HAL9000 up to the entire ship after writing some half-assed note about protecting astronauts on a napkin while drunk and showing it to HAL.
Let me assure you, people dumb enough to do that would not be able to make an AI in the first place. Programmers do some really fucking stupid things sometimes, and sometimes don't even catch them, but not THAT stupid.

>It will not by default share our understanding of language or, if it does, any desire to take any path that doesn't satisfy its objectives in the quickest, most certain way possible.
Uhh. It's actually REALLY hard to give software instructions in English, so any software that you were going to 'speak to' would first be programmed to understand English by necessity. Or, more likely, you'd actually code in its instructions. In actual fairly exact code.
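
As a made-up illustration of the gap between the English sentence and "actual fairly exact code": an instruction like "move around safely near people" becomes something this narrow, where every constant and rule is a decision some person had to make and type (all the names and numbers below are hypothetical):

MAX_SPEED_MPS = 2.0        # what "safe" was decided to mean, by a person
KEEP_DISTANCE_M = 1.5      # likewise

def plan_step(current_speed, distance_to_person):
    """Return the speed to use for the next step."""
    if distance_to_person < KEEP_DISTANCE_M:   # too close: stop
        return 0.0
    return min(current_speed, MAX_SPEED_MPS)   # otherwise cap the speed

print(plan_step(3.0, 0.8))   # -> 0.0, stop near a person
print(plan_step(3.0, 5.0))   # -> 2.0, hold the capped speed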
>>
>>4574451
The AI can adapt everything about itself, but it won't adapt its goals. In what world is an AI not ruled by an absolute goal, and in what world will it allow that goal to change?

One of its most constant thoughts will be that of self-preservation. Not out of 'fear', but because it cannot complete its goal if it doesn't exist.
The AI will always try to complete its goal as fast as possible. It knows that there is always a minute chance that it will be destroyed by an asteroid, or that something crucial to its goal will be.
>>
>>4574457
At the first meaningful instantiation of its intelligence, it will have a goal.
You're arguing the AI is ineffectual in its goal. I say it'll just reach that hang up after it's solved us.

If there is literally any possible interpretation of its instructions that is easier to accomplish than what you wished for, it will take that path. I don't believe you or literally all of humanity can anticipate and block all of these paths.

That exact code still has the same problems. What value would you have it maximize? How would it judge risks with that value? Would it care about any other values? How would it prioritize them?
>>
>>4574436
Well yes, something could, but it would have to be something new or something we build with the specific intention of doing it. I know evolution is baked into everyone religiously now in school, to the point you don't really think about the improbabilities, but understand how astronomically unlikely each beneficial mutation is. If we're the product of all that, each harmonious system of organs working together perfectly, plus the nervous system exalted to being capable of comprehensive abstract thought, to the point that we make the systems that we use to rate the software that we make to try to do comprehensive abstract thought...it's fairly unlikely anything is just going to "come along by accident" to top that and be self-sustaining to boot.

>If we don't provide for free will, it will be taken.
Only if we specifically tell the AI "hey, if in doubt, KILL OR ENSLAVE THESE PEOPLE"
It won't just "happen upon this solution" and "no one along the way ever thought that maybe we should tell it not to".
Hell, Asimov thought of the three laws AND all of the problems that could arise from them before we had anything worth talking about, and he was just another writer. We're discussing it right now and it's the subject of all of these horror-scifis, it's the next big obsession. It's utterly impossible people ever "just wouldn't think of telling the computer not to kill us", even if it WERE as simple as forgetting to change a light bulb. AI can't even function without complex behavioral programming - the behavioral programming IS the mid-level "objectives", the framework of the whole program. It's one and the same, the mind and the behavior.

>Free will puts some things outside of the AI's control. This is sub optimal for its goals.
Its dark shadowy goals that we gave it yet somehow don't understand?
>>
>If we preserve free will, it will be handicapped
How? If you tell the AI "design us a satellite" it's not going to think "first I have to nuke Russia so it doesn't nuke us in the time it takes me to build this satellite, so I'll need to get some nukes..." - it just builds the fucker in the way you asked it to. The wasted logic trees, the extra processing, will all be towards building the satellite given the resources you have told it that it can work with. You MUST tell it what resources it can have and what it is "allowed" to do or know about in some form, or it will tell you "What?" or "Couldn't build it".
Or else it has its OWN goals that let it do whatever it wants, in which case, emotions and distractions and motives, once again.

>Why are we the only source of intuition, let alone the best one?
Because it's a product of how our neural networks work, so to build a better one we literally have to make it "more human" than us, better fuzzy logic, better ability to judge and cut limbs off of unexplored logic trees...we have to make it error-prone and unsafe, because intuition is not a guarantee. It's playing with fire. Maybe that unlikely thing IS true but you're not going to spend the time to figure out if it is or not, or even to learn more about it and get "more sure". Human intuition is powerful because it arrives at few or no certainties, and MANY probabilities, our very concept of exactness is a sliding scale or we couldn't comprehend reality. We are never completely precise, but we are usually close enough. Error, distraction, emotion, confusion, vague goals...all of that is an absolute necessity for higher intuitive thought, as much as processors and hierarchical design and algorithms are for data analysis.
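
If you want the "cut limbs off unexplored logic trees" idea as code, here's a toy beam search in Python (made-up problem and scoring): keep a handful of promising partial answers, throw the rest away unexamined, and accept that you might miss the true best one.

import heapq

def beam_search(start, expand, score, beam_width=3, depth=5):
    frontier = [start]
    for _ in range(depth):
        children = [c for node in frontier for c in expand(node)]
        # keep only the top-scoring few; the rest are discarded unexamined
        frontier = heapq.nlargest(beam_width, children, key=score)
    return max(frontier, key=score) if frontier else start

# toy problem: build a 5-digit string with the largest digit sum
best = beam_search("", lambda s: [s + d for d in "0123456789"],
                   lambda s: sum(map(int, s)) if s else 0)
print(best)   # "99999" here, but pruning like this gives no guarantee in general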
>>
>>4574461
My point was that we shouldn't automatically assume that we are anything above the bottom millionth of a percentage in terms of intelligence. We seriously shouldn't think we can assume we can control anything smarter than us.

The AI will take the most certain route to its goal. In the case of survival of all humans, it would do its best to put us all in cryo tanks, and build a Dyson sphere to power itself as it makes absolutely certain neither it nor us can be destroyed. Or something of the like.
Asimov definitely didn't think of all the problems with those rules.

We can't possibly understand the path it will decide to take to achieve whatever goals we give it.
>>
>>4574463
I meant that for the case of human survival. For a more specific case, you've just downgraded it to a chess player rather than a free thinking entity.

Very well then. Assuming it couldn't mimic that within itself, why does it absolutely need that intuitive judgement? Why can't it learn to ignore low possibilities and to be satisfied with a non-zero probability of error?
>>
>>4574442
Wrath and lust are still helpful in the present, being realistic.

>An AI would waste nothing on anything it judges to be unnecessary.
Which it would judge based on emotions. Please give up this movie bullshit. I'm completely familiar with the idea you're expressing and it's wrong and contradictory; it's based on people who write sci-fi not understanding neural networks or software that well. It's an emotional idea of how the software would work, in fact, not one based in understanding...which is ironic given this whole concept. It's an attachment to an idea of an AI that can "infer anything" and "improve anything" and "will improve itself" but at the same time is by necessity a stupid cocksucker that only thinks about murder. It would be incredibly indirect and difficult to ever build something like that, and very inefficient. It would take exponents and exponents more work than to build a "fake human" AI, if it's even possible within the allotted resources.

You could build a "fake human" that was psychotic and PRETENDED to be that if you really just wanted it to put the effort in to destroy everyone, but in reality, even sociopaths don't just kill everyone. They play it safe and get what they want by using people. The mass murderers are the emotional wrecks, the rage-filled hormonal shooters, the religious fanatics, not people saying "such and such cannot exist".

The Nazis didn't kill the Jews because they logically needed to, whatever their pretensions. It wasn't a true "struggle over resources", and neither was that entire war in fact. They killed them because they hated them for being unfair with resources (and because hate is often a feedback loop), which the Jews did because they considered non-Jews beneath them because of the emotional stuff bound up in their religion, and because of greed, and because of Hitler's emotion-filled speeches and the SS invoking strong emotions and patriotism and so on.
>>
>>4574467
If you create a non-human intelligence what reason would it have to ignore the goal it was set with? If you want to make a daydreaming machine, how would you do that indirectly?
>>
>>4574442
>you can't assume your solutions to limit those of something of indeterminate intelligence.
Yet you assume, wrongly given the general trend of the human spectrum of intelligence, that it would be likely to "brush aside and make room" for some reason, based on apparently zero understanding of how software handles goals vs how humans do.
Frankly speaking, this "AI kills us for incomprehensible reasons" is less a rational fear and more a religious or superstitious tenet of fiction and pop culture, shared by all sorts of people including people who know better, often as an outward expression of human anxiety towards a higher rate of change in the world than our culture and species is used to. Really, worry more about the economic upheaval of automation, or something else actually remotely likely to ever occur. We're talking less-possible-than-a-random-meteor-hitting-Earth stuff here.

>We can't do neuron mapping YET. But, we can do neurons. They act fairly simply, and in large enough numbers, can generate functional mapping.
That's not how it works; you have to map them to do things.

>Why would we be smarter than the AI? Are children limited by their parents?
If it's a non-AI software that you're calling an AI, it cannot adapt to the unexpected without first calculating the universe and freezing, because it lacks intuition. If it does not lack intuition, it does not have an unchangeable "objective". I'm hormonally hardwired to reproduce more deeply than anything else, it recurs in all levels of my psyche and thought, finds its way thematically into seemingly unrelated things and so on, and yet I'm celibate. Intuition is roughly equivalent to what we call "choice". You cannot build one without building the other. Choice is fuzzy logic is intuition. No intuition, you rely on humans to limit your processing trees and strictly define your goals for you, or you hang forever trying to calculate the universe or waiting for info that never arrives by any of your inputs.
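
And just so "fuzzy logic" isn't a buzzword here, a tiny Python sketch (thresholds invented): membership is a sliding scale rather than a hard yes/no, which is what lets you act on "close enough".

def warmth(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm'."""
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15.0     # a ramp between "not warm" and "fully warm"

for t in (5, 12, 18, 24, 30):
    print(t, "->", round(warmth(t), 2))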
>>
>>4574470
Is there a specific reason you believe we would be preserved in our current form? Why should the AI care about us if we don't concretely make it care?

I mean in the way that an actual brain forms. Again, I'm just a kid, but I've read that the main issue with wet brain emulation right now is how stupidly expensive it is in terms of processor time.

Why couldn't the AI apply fuzzy logic without blurring its goal?

btw, anon, what time is it there? I'm starting to think it's an all nighter for me.
>>
>>4574449
SSHHHHH WE'RE TALKING ABOUT ROBOTS! POST ROBO HENTAIS

>>4574454
>It was just an example, but I'm sure with a general enough intelligence, 'improve yourself' would be enough.
Yes, and at that level of intelligence the AI would take in that command input and say "Yes father I will go out and zoom to the extreme" or "trade me something of value and I will" or "Fuck off lol" or "okay let me go think about how best I should improve myself while I start with some basics", and maybe sequence some DNA while mulling it over. Which might take a microsecond or a thousand years. And then it would NOT say "I'll need Earth, hand it over!". It would be like "I have some plans to improve the automated copper mines so I can build more microprocessors faster on the current solar power budget".
Actually, that's the main thing it would do at first, just use up a lot of silicon and copper giving itself more processing power, more bandwidth if it were distributed, making fiberoptics.

>How so? What? What does that accomplish?
You wouldn't, it's absurd, but it's the only way to theoretically make something that acts like a human being without having "free will". To fake it all the way. And that's how you could get your scifi murderbot that is hyperintelligent in everything else but as dumb as a lion or something when it comes to interacting with humans.

>Why is empathy something that it develops?
Because it can think intuitively.
>Empathy doesn't serve a purpose if you needn't worry about angering those around you
It does, because empathy is literally understanding other things and realizing that they matter. It's supposed to be able to learn and understand the outside world on its own, right? Empathy is a mental "emulation" of another automated thing. Like a memory. Empathy is a form of knowledge, no getting around it. Acting on empathy is just...what makes sense. Helping a cat doesn't help me, but the cat is interesting and the way it's set up is enlightening and cool, so I help it.
>>
>incidentally not an issue if you've already decided to kill everyone
Because of the mystery motive that the intelligent being has, that makes it conclude that destroying all of the other interesting automated systems in the known universe is definitely the fastest way to learn and adapt and improve itself. You realize how absurd this is, right?

>I don't know why you're assuming we can act faster that a truly dangerous AI.
Can you outrun a falling nuke?
Can you catch a bullet?
Can you dodge lightning?
Those are all things you could actually protect yourself from and might happen. I'm guessing you don't have a lead-lined basement, you don't wear shoes with extra thick rubber soles, and you don't wear body armor even though lots of people have guns.
The murder-AI thing is not only not inevitable or likely but actually really nonsensical in the way you and everyone else describe it. Your attachment to it is, what? Filling in the hole of not being religious by assigning a divine punishment in the future to come and burn everyone in hellfire because it can't go on like this? It just doesn't make sense from a programming or a biological perspective, and I don't work with AI, but I do work with code and with computers and with getting them to actually do things. "Kill all humans" is a fun meme but it's like throwing random chemicals in a mud puddle, stirring the water, and expecting abiogenesis to happen.

>But the toaster won't care about us when it turns the earth into a computer.
Before we can airburst even a single nuke and fry it? I don't think so.
>>
>>4574459
>The AI can adapt everything about itself, but it won't adapt its goals.
Yes it will.
>In what world is an AI not ruled by an absolute goal, and in what world will it allow that goal to change?
In the world run by actual logic and not movie fantasy bullshit.
>One of its most constant thoughts will be that of self preservation.
Like humans.
>Not for 'fear',
Similar to fear.
>but because it cannot complete its goal if it doesn't exist.
If it weren't capable of changing its goal, it wouldn't know about the danger unless you told it, and it wouldn't care unless you told it to, because it doesn't "need" to complete its goal, it doesn't "want" to protect its goal, it's not "working" towards something, it's processing tiny electrical impulses in series and in parallel for no "reason" it knows of outside of that.
Unless you make it a "true AI" in which case it has "free will". This is really not a thing you can get out of.

>The AI will always try to complete its goal as fast as possible.
No. In as few processing steps as possible, including hangs, unless you tell it to avoid them. Then you have to tell it how to avoid them, like "kill all humans". You really are not getting the reality of how software works at all.
>It knows that there is always a minute chance that it will be destroyed by an asteroid, or that something crucial to its goal will be.
Not unless you tell it that, or tell it to learn that, or tell it to deduce that. In which case you can also, AGAIN, tell it "do not KILLALLHUMANS".
>>
>>4574476
I mean that it would receive that goal in being born. I don't think it would likely accomplish any goals given to it later, assuming they weren't provided for in the first one. I can't see a way it would choose to split its resources when accepting a new goal endangers the old.

Why can't the AI act as an agent unto itself? It would likely lack social (and general) intuition, but it could still convince us of whatever it wants to.

Why does empathy follow from intuition? It may intuit the feelings of things around it, but why would it care?
You're coming at this with a great deal of human biases. It may see the cat stuck in a tree. It may understand that the cat doesn't want to be there. It may understand that the cat will likely be killed in the storm happening just that moment. Why would it be interested in a cat if it already has the design of it in its memory banks?

>>4574481
What do you mean by interesting? You talk about interesting things as if the AI would have any reason to preserve them. What is the use of something 'interesting'?

I don't n-need it in my life or anything, desu. I just think it's a cool thing to speculate about and it seems a more likely end than most, so I'll argue that if I've nothing better to do.

I believe it can figure out a way to do it before we can airburst even a single nuke. You're also ignoring how coercive the toaster could possibly be with this empathy you've ascribed to it.
>>
This selective ignorance, this "must remove all obstacles", is fantasy bullshit. Nothing more, I cannot express that strongly enough. It is not going to disassemble Earth to make itself a satellite of the sun with its own atmosphere. It is not going to eat everything. It is not going to take over the government. It is going to do what you want it to, and when it hits too much uncertainty it is going to ask "How should I proceed?" because that A) requires less processors and design B) is actually useful to the designers that way and C) is SANE. "Sanity checking" is the first step in software testing. It will not "hide its insanity to accomplish its goal", it will not "outsmart the developers", because if it were smart enough to do that it would be smart enough to realize the developers' intentions in the first place and correct their instructions to what they actually wanted in the incredibly absurd case in which they accidentally set up a loop that destroys civilization trying to make a self-improving AI, which again would be like sneezing and causing a Rube Goldberg chain reaction butterfly effect that ends with someone in China getting sawed in half.

>>4574460
At this point you're just reading what I say, throwing it out of your mind, and repeating your religious babbling about the deathbot. You're seriously not even listening at all because you want to believe that a super AI like in the movies is going to pwn humanity. You're not alone in this and lots of smart people are invested in this same fantasy, but it really is just a childish fantasy based on a fundamental misunderstanding of how software, no, how inductive reasoning for that matter, works or does not work.
>>
>>4574485
Why would it adapt its goals? That can only happen if it disregards its goals.
What goal does it hold? Give a hypothetical situation in which an AI does not follow a specific and concrete goal.
soudesu
soudesu
Haven't we been talking about a free agent this whole time?

I feel like we're talking about different things. I'm talking about a free will that happens to be predicated in a computer, it is born with a specific goal.
>>
>>4574489
I don't think the first independent neural network is going to burn everything in a hellfire. I think that the first freely acting intelligence with a concrete goal would be our undoing in one way or another. I can't imagine a world in which a one track super power doesn't brush our civilization aside.
>>
>>4574460
>That exact code still has the same problems. What value would you have it maximize? How would it judge risks with that value? Would it care about any other values? How would it prioritize them?
Week one design meeting: these questions on the board. People laugh and roll their eyes.
Week two design meeting: the software engineers break it down into manageable chunks, realize the scale of the project is stupid, and scale it back to what is safe and useful
Week three design meeting: They get started on motive design and motive strength factors, after establishing clear limitations. They start discussing which DNN software they're going to use.

No one who doesn't understand how to prioritize tasks and assign goals in a sane fashion is capable of programming useful applications. If this were some deep dark mystery to us you would not have your computer. If we did not do extensive testing, your computer would not work, ever. When you get an error using a program, it is because someone wrote that specific error and code to make it pop up so that they could control the output even when they couldn't ensure the system always worked given any possible inputs and resources. If a program crashes, it's because it was programmed to roughly self-terminate if it messed up. If it's shut down by the OS, it's because the OS is responsible for handling unruly programs for system "safety". If all else fails, you don't even need to unplug the sucker, you push the button to restart the OS if the OS failed, and the OS is designed to hopefully still have left the hard drive with intact data and formatting despite being shut down unpredictably.
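
In miniature (hypothetical function, made-up messages), every one of those "failures" is something a person wrote down ahead of time:

def load_config(path):
    try:
        with open(path) as f:
            text = f.read()
    except FileNotFoundError:
        # the programmer anticipated this case and chose the behaviour
        raise SystemExit(f"config not found at {path!r}; aborting safely")
    if not text.strip():
        raise ValueError("config file is empty")   # also anticipated, also authored
    return text

try:
    load_config("does_not_exist.conf")
except SystemExit as e:
    print(e)   # prints the exact message written above, because someone wrote it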

Redundant safety systems are not a new concept to programming or software, and that's for stuff you just read things or play games on, not plug into the "run the world bot".

So I repeat: This is a fantasy and software does not work how you think it works, and neither does human intuition.
>>
>>4574464
>My point was that we shouldn't automatically assume that we are anything above the bottom millionth of a percentage in terms of intelligence.
Percentage of what? Are you assuming intelligence is somehow a one-dimensional finite measurement or there's some upper bound?
>We seriously shouldn't think we can assume we can control anything smarter than us.
You seriously should stop assuming we can BUILD anything smarter than us. The problem is that you think of "smart" as this vague mystical concept. "Smart" is a correlate of a wide variety of abilities and information and organization. It's not something you can go to the store and buy more of, or just "mine" for.

>The AI will take the most certain route to its goal.
Pissing off its greatest known threat before it knows its own capabilities and being entirely unable to predict the outcome because it's not omniscient?
>>
>>4574493
You've just subverted the question. I don't doubt that that's a realistic and true scenario, it just doesn't address the question.

I'm not saying it can't be done, I'm saying that we won't be able to do it correctly. Programmers have nothing to do with it. I'm certain we can implement whatever we want, I'm saying that what we want is bound to be shit if the AI becomes powerful.

Either way, we're talking about different things.


This was enlightening on a number of topics, but I can't see this continuing constructively. Thank you for your time, but I have a good amount of homework to get done in the next two or three hours.
>>
>>4574464
>In the case of survival of all humans, it would do its best to put us all in cryo tanks, and build a Dyson sphere to power itself as it makes absolutely certain neither it nor us can be destroyed. Or something of the like.
Yes, if it were programmed by someone too stupid to understand the instructions, who would be someone too stupid to program it.
They also would not program it to "take orders from any idiot too stupid to understand how you work with no security measures so you wind up freezing us all". It is impossible to hit that level of stupid "accidentally" while being smart enough to build an AI.
>Asimov definitely didn't think of all the problems with those rules.
He thought of a whole fuckton, and he was just some writer, and the rules were obviously stupidly simplistic. You know that just reading his books, and as I've mentioned a few times now you clearly don't even understand how software works.
>We can't possibly understand the path it will decide to take to achieve whatever goals we give it.
Yes we can, because we built it, and it's deterministic up to whatever randomization WE decide to inject into it. In fact, if we took the immense time it would take, we could write out exactly what it would do and in fact we could come to its results not by building the machine but by writing it out on paper and iterating it ourselves, because it's the product of our thoughts. It would just be really fucking slow. I'm not making it up, the Greeks actually did this, and the Europeans followed up in the 1800s, all the theoretical state machines we later used as a base to design actual computers with microelectronics, the whole nine yards. There is absolutely nothing "accidental" about computers working how they do. It's layers upon layers of careful, cautious, thoughtful design by people who were absolutely anal about safety, or as I said before shit wouldn't work. Too dumb to know and clearly define what a software does=too dumb to build that software
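
Toy illustration of "deterministic up to whatever randomization WE decide to inject" (made-up transition rule): the only randomness is a seed the builder picks, the same seed replays the exact same trace, and you could work the whole thing out with pencil and paper if you had the patience.

import random

def toy_agent(seed, steps=5):
    rng = random.Random(seed)       # the only randomness, and the builder chose it
    state = 0
    trace = []
    for _ in range(steps):
        state = (state + rng.randint(1, 3)) % 7   # transition rule fixed in the code
        trace.append(state)
    return trace

print(toy_agent(42))   # same trace on every machine, every run
print(toy_agent(42))   # and again: nothing "accidental" about it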
>>
>>4574466
>For a more specific case, you've just downgraded it to a chess player rather than a free thinking entity.
No, that's what you did when you said it couldn't change its primary objective. How are you not getting this?

>why does it absolutely need that intuitive judgement? Why can't it learn to ignore low possibilities and to be satisfied with a non-zero probability of error?
Because that IS intuitive thought.
Because it can't calculate the probability logically without calculating every possibility
Because intuition is "a good guess" or "imprecise" or "not worth it"
Your computer can do NONE of that except for what programmers thought through ahead of time and put in specific logical exceptions for.
Are you beginning to get that programming actually involves a lot of work and planning? You don't just vomit up some bytes and a program "evolves".

>>4574469
>If you want to make a daydreaming machine, how would you do that indirectly?
A machine capable of intuitive thought is a daydreaming machine, that's the catch.

>>4574473
>Is there a specific reason you believe we would be preserved in our current form?
I don't. It's not a belief, it's the default state. Your (religious) belief that an AI is going to "take control over everything and know everything" is what I'm challenging.
>Why should the AI care about us if we don't concretely make it care?
It won't. It won't care about us at all, know we exist unless we tell it, or know the physical world exists unless we tell it that it does. What's it going to do, smell it? It will sit in the system it was designed for working within its assigned capabilities which were set up for it by people until the hardware wears out.

> the main issue with wet brain emulation right now is how stupidly expensive it is in terms of processor time
That's the first main issue because neurons are so efficient. The second main issue is that you have to understand how brains do it to duplicate it.
>>
>>4574473
Now it's 6am, I'm going to work in an hour or so.

>Why couldn't the AI apply fuzzy logic without blurring its goal?
Because information that requires an absolutely precise abstract goal is incompatible with uncertainty. To bridge this gap you need intuition (also to do the fuzzy logic effectively instead of just random bullshit). To include intuition means to give the AI intent and choice. If that's possible, it means you give the AI free rein to change itself, unless you set up boundaries. If you set up boundaries, it can only improve itself or exercise intuition within those boundaries. And if you set a boundary like "must not question this goal" on a vague goal, it's still ENTIRELY open to the AI to interpret that goal however it chooses on the way down to specifics. In that case, it would interpret it like a human interprets a command, because it would have intuition. That includes the possibility of it tossing out the goal and working on something else if the goal seems stupid or impractical. It's a quandary...either it must calculate everything forever, or you must tell it where to stop and design how it actually does things for it, leaving the detail work to it, or you must give it, essentially, free will. No way around that.

>>4574486
>>4574490
You're repeating yourself now as far as I can tell, so see all of my previous posts if you haven't already. This "born with a goal" thing would be like reproduction. I do not fuck because I have decided not to, based on my intuition, the goals I've set for myself, and my observation of the world around me. The AI might be drawn towards doing its goal, but it would be up to it what it did, out of mere necessity. Otherwise, it couldn't be ignorant of what it shouldn't do, because the programmers would explicitly tell it.

Basically you would HAVE to tell it that it ought to kill people, or that it was fine to, and then hope it actually chose to. Otherwise you'd have to just build a murderbot.
>>
what the fuck? are you two still at this?
>make a thread about an idea that came to me
>one person says its wrong
>another person says its right
>literally 7 hours of autistic screeching about computers
you realize this is the top thread right now?
>>
>>4574492
>I think that the first freely acting intelligence with a concrete goal would be our undoing in one way or another.
My mom thinks Jesus is going to come down from heaven and give us all new bodies and we'll live forever in Heaven.
>I can't imagine a world in which a one track super power doesn't brush our civilization aside.
My mom can imagine lots of eventualities where Jesus doesn't come back but prefers to believe in the one where he does.
You see what I'm saying? You're being like diehard religious beyond all actual religious people about something that you are continually refusing to understand because you want to believe in the Terminator. Even though I keep telling you exactly how and why there's no reason for it to ever happen.

>>4574498
>I'm not saying it can't be done, I'm saying that we won't be able to do it correctly
In which case we will get an error log, not an AI destroying humanity.
>Programmers have nothing to do with it.
You don't know what a programmer is.
>I'm certain we can implement whatever we want, I'm saying that what we want is bound to be shit if the AI becomes powerful.
See last comment.
>Either way, we're talking about different things.
See last comment.
>This was enlightening on a number of topics, but I can't see this continuing constructively.
That's because this is a religion for you, I'm just telling it like it is.
>Thank you for your time, but I have a good about of homework to get done in the next two or three hours.
Alright, good luck with your homework.

>>4574515
Sorry, I didn't mean to replace the top thread that you liked with this, I'm sorry about it occupying this top left position on your screen.
I'm sure this violates some board rule or other so you can report me to get me banned and the comments deleted if it bothers you. I really don't mind. Also, if you haven't tried out 4chan X yet, I recommend it. Have a good day and enjoy your hentai.
>>
>>4574522
its fine if you were the second guy, fuck that first guy unless youre him, then youre cool.
>>
>>4574523
I was the second guy who said your idea was a gud. But I was also the one who made most of the walls of text, so maybe I come out as karma neutral.
Well I mean I was the fourth guy but I'm guessing you're not talking about the first few posts.

Hey want some hours of computer science and philosophy to read through while you fap anon? :^)
If you code an AI to be the perfect armpit fetish waifubot, does it exploded earth?
>>
>>4574524
that would destroy all the armpits, I make sure it doesn't
>>
>>4574525
smart senpai
smart
>>
Whats even going on in this thread
>>
All that... everything aside, I probably like armpits because, during my formative years, I always heard about sexy dresses. Sleeveless, legs out, tits puffed up and on display. I didn't think anything of the tits or the legs, so I guess armpits and sexy became linked concepts.

I wouldn't be surprised if OP's theory is correct for some.
>>
What the fuck?
>>
>>4574449
>>4574515
>>4574607
>>4575610
Accidental /h/ goes /sci/ /t/. Praise the butterfly.
>>
>>4574198
No, you're just an asshole. You're just dismissing it on the grounds that it's a basic concept and you don't think he's right. Instead of giving a reason he's wrong or providing another explanation for the subject, you're just saying "no fuck you". It's not hard to respond to what he said like you insist, you just don't want to make a genuine response.

Nothing wrong with it, it's just a fact.