ITT: post the dumbest questions you were afraid to ask before.
>anon nobody will judge you
gcc actually bootstraps itself in stages: first it's compiled with the old compiler, then the new-compiler-built-with-the-old-one compiles itself, then that result compiles itself again. It then diffs the last two builds to check for errors between them.
What? No. there's c++98 and c++11.
-Ox is the optimization level.
-O0 is no optimizations.
-O1 is optimizations that are ALWAYS safe.
-O2 is the 'standard' level.
-O3 is increasing speed, but will make debugging hard to impossible, bloat executable size, and may even produce slower code (due to the larger executable).
-Ofast is FUCK THE STANDARDS, GIVE ME SPEED! It starts making assumptions that may not be true.
-Og is very new and means "give me the best settings for debugging"; it's somewhere between -O0 and -O2 (but not equivalent to -O1).
Some things can be loaded into RAM, right? Like when you boot up Puppy Linux, or some other similar distro, then take the USB out, you can still be running that distro. So all the files are loaded into RAM. Say if I'm watching a video then delete that file, it still plays because it's in RAM, right? Is there a way to take a file that is loaded into RAM and make a copy of it onto your hard drive?
I work with Java on a quite large project and I can say that design pattern knowledge is a must.
I haven't read any particular book about them, mostly separate articles about particular patterns, but I plan to read in the future: HEAD FIRST DESIGN PATTERNS (as an intro book) and then the classical DESIGN PATTERNS by the Gang of Four.
And -Os, which optimises for space rather than execution speed. It uses all the -O2 options that shouldn't increase the code size, as well as some additional space-saving optimisations.
Of course it is possible iff the whole file was mapped in (which depends on the size of the file, the amount of ram in use, and the OS's caching/paging policy).
Trivial proof: Assuming the whole file is mapped into the process, take a physical RAM dump, rebuild the video player's virtual memory map, dump the file from that.
Of course, it's probably not possible, since most of the video won't be mapped into the process's space, if it's even mapped at all in the OS cache.
How do people make games with graphics? I can't grasp the concept of it.
What do they connect to each other in a language? It just fucks my brain so hard, considering the only things I've done are some webdesign and C#.
Like, everything moves and shit
I'm no expert so take my comment with a grain of salt but I've found Head First Design Patterns to be very easy to read and remember. Haven't applied any patterns yet as I'm still only a novice
Many ways. You can use a library, like SFML or SDL, or go hardcore and use WinAPI / libX11. Then there's the choice of software drawing or using DirectX / OpenGL.
In the game, you obviously have a main loop, which (assuming single player) consists simply of processing input, doing all the calculations (moving everything, applying damage, blah blah), and at the end you draw the frame and send it to the screen.
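The loop described above can be sketched in a few lines. This is a hypothetical, language-agnostic sketch (in Python, not any real engine's API); the phase functions are toy stand-ins I made up to show the input → update → draw structure.

```python
# A minimal single-player game loop sketch: process input, update the game
# state, then draw the frame. All names here are made up for illustration.

def run_game(frames, get_input, update, draw):
    """Run the main loop for a fixed number of frames (for demonstration)."""
    state = {"x": 0, "y": 0}
    for _ in range(frames):
        events = get_input()      # 1. process input
        update(state, events)     # 2. move everything, apply damage, etc.
        draw(state)               # 3. render the frame and send it to the screen
    return state

# Toy stand-ins for the three phases:
def get_input():
    return ["move_right"]

def update(state, events):
    if "move_right" in events:
        state["x"] += 1

drawn = []
def draw(state):
    drawn.append(dict(state))  # pretend this blits the frame to the screen

final = run_game(3, get_input, update, draw)
print(final["x"])  # → 3
```

A real loop would run until the player quits rather than for a fixed frame count, but the three phases stay the same.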
so all this is calculated and drawn in 1/24th of a second? just how fast are those computers?
can you create a 3D game only with C++ without using any engines or other stuff people before you created?
That's how it was for me and I hear it is for others as well. It's all about experience, it will be hard learning any language at first but once you do it's just about how much time you put into the language. You'll get better with more time and learning new languages will be easier after learning your first (it's all just syntax after that).
A new programming language
- fast as C
- safe as Haskell
- elegant as Python
Why has nobody done this? Every time some new programming language is released it is either slow, unsafe, or has an alien syntax. D came close to perfection, but then they tainted it with GC. They claim the GC is "optional", but there is no simple compiler flag to disable it. You just have to "know" that your code doesn't do anything to accidentally invoke it.
I realize that some of these things contradict one another. For example, a safety feature like bounds checking is going to be slower than the unsafe array access like C has. But what the language could do is allow optional bounds checking. There are several ways of implementing it. One idea might be an Array type with no bounds checking and a List type that does do it. Another could be a code attribute, like @unsafe. A third option is a compiler flag. I don't really care how they do it, but give the option.
Rust looks interesting, but I'm not sure if the GC is really optional like they say, or if it is "optional" like it is in D.
Yeah, computers are fast like fuark. If you grab some old games (eg. the original X-COM), computers back then were quite slow, so they used a slightly different approach - they drew an image to the screen, and then tried to redraw only as much as needed. So when you moved a unit, the game would only draw the unit in its new position and redraw the background where the unit used to be.
But since computers got faster over time, no one really does this anymore. Most of the time, double buffering is used. Which means you have two "display surfaces" - one is displayed, the other one is hidden. Every frame, those 60+ times per second, you draw everything from scratch on the "hidden" buffer, and then flip them - sending the new content to the screen, and taking the old content to redraw the next frame there.
Can you tell me what you like about D and Rust setting aside the GC. I wanted to learn one of these newer languages but I don't want to waste time learning one that I may never use. D seems like something I'd use but I'm afraid to learn it while it's still seemingly in development. Also why is GC bad?
The absolute minimum for non-retard coding are:
The std flag takes an argument like c89, c99 or c++11. Use whichever is applicable to you. Additional arguments that are a good idea are:
And before anyone thinks I'm talking out my ass, these are the flags used by the Jet Propulsion Laboratory (JPL). You know, those guys who landed a fucking robot on Mars. They had to ensure their code was correct because you don't get a second chance with that kind of shit.
Citation: See page 8 of
With double buffering, you have a separate "surface" for what is displayed on the screen, and a separate one that you can draw the next frame on. You only swap the buffers and send the next frame to the display when it's ready. When using only a single buffer, you risk having data sent to the screen sometime in the middle of drawing the frame.
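The swap can be sketched with two plain lists standing in for the surfaces. This is a toy model, not any real graphics API; the "screen" only ever sees a completely drawn buffer.

```python
# Hypothetical double buffering sketch: two "surfaces" (here just lists of
# pixel values), one shown, one drawn to, swapped after each finished frame.

WIDTH = 4

front = [0] * WIDTH   # what the "screen" currently shows
back  = [0] * WIDTH   # where the next frame is drawn

def draw_frame(buf, frame_number):
    # Draw the whole frame from scratch into the hidden buffer.
    for i in range(WIDTH):
        buf[i] = frame_number

def flip():
    # Swap which buffer is displayed; the screen never sees a half-drawn frame.
    global front, back
    front, back = back, front

for frame in (1, 2, 3):
    draw_frame(back, frame)   # render into the hidden buffer
    flip()                    # present it

print(front)  # → [3, 3, 3, 3]
```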
Using the old method could increase the speed if you're doing software rendering - since you're changing less pixels, you save on the time of overdraw. The question would be how much speed you can save vs. how much hassle implementing this would bring. But this is mostly for software rendering; I don't think there would be much change if you were drawing via hardware.
>Can you tell me what you like about D and Rust
They have strong, static typing. They are compiled to native code without bloated VM garbage. They have language features that allow for the most errors possible to be caught at compile time instead of run time. They have concurrency. There are a lot more reasons, but these are some big ones.
>I wanted to learn one of these newer languages but I don't want to waste time learning one that I may never use
Learning is never a waste of time. And if you don't use it, big deal. You can still have some fun during the learning process.
>D seems like something I'd use but I'm afraid to learn it while it's still seemingly in development
D is stable now. Changes made should be minimal and not program-breaking. New development consists of improving the compiler and adding backwards compatible features. You can safely learn D and your code will work well into the future. Rust is still under development, so I can't say the same thing for it, but D is stable.
>why is GC bad?
It's not always bad. There are many applications where GC is awesome. It saves time having to fuck with manual memory management. But, it has a performance cost. While GC is great for some things, it's terrible for high performance applications or things which need low latency like video games. So don't think I'm saying GC is universally bad. It's just bad for some applications. The type of applications where GC is acceptable have no shortage of language options. You have dozens of languages to choose from and more seem to come out every year. But if you're writing an application where you don't want a GC, then your options are pitiful. You have C and C++. There are a few other non-mainstream languages too, but if you want any chance of finding libraries, then C and C++ are pretty much it. The non-GC application space desperately needs better options and only recently have they started getting some.
The question doesn't make much sense. C and C++ are different languages. If you are using C++, then go with C++11. If you are using C, then you could use either C89 or C99. The additional features of C99 are very meh.
Also, C89 isn't the default. GCC's default is the GNU dialect which includes a lot of non-standard language extensions.
What I wanted to ask was,
Are there any particular reasons why I should choose c++89 or c++03 over the c++11 standard? And I'm assuming its preferable to not use the gnu option? I'm quite new & interested in c++ but we don't really learn about this in school... only java & webshit.
no go with the new shit
soon there is gonna be c++14 switch over to that as soon as possible
ten years from now people will barely use c++14 so in order to have a future adapt now!
>Are there any particular reasons why I should choose c++89 or c++03 over the c++11 standard?
Small correction. One of the C standards is C89, but it is C++98 for C++. There is no C++89. In general you should choose the newest standard unless you are working on a legacy codebase, or there is something you really dislike about the new standard and have some particular reason to avoid it. I say go with C++11.
>And I'm assuming its preferable to not use the gnu option?
Yes; at least in my opinion. The GNU dialect is only compatible with GCC and Clang (because Clang aims for GNU compatibility). It will fail with all other compilers.
>we don't really learn about this in school... only java & webshit.
Yup. Same situation here. I felt scammed. Everyone told me I had to go to university or I wouldn't know anything and would be a failure at life, etc. Holy fuck, what a fucking scam. I learned nothing I hadn't already read in books years earlier when I learned on my own. Not only is university a gigantic waste of time and money that scams people, but it's the fact that they've got 99% of the population supporting their bullshit and telling everyone that you MUST go to university to get a job and you MUST go to not be considered a failure at life. That is some fucked up shit right there.
on Windows, when you do a memory dump of a process using the task manager, sometimes you can find parts of the streamed file
but then you will have to deal with those fragmented parts; you can't really know/guess which part is/goes where
it's ultimately better to hook the process and dump on the fly
Hey man thanks a bunch, and I'm feeling you about university teaching the wrong stuff. It's like they only want to educate people into website-making codemonkeys and not people who understand computing. But I'm sure that without university I wouldn't have understood so much or been motivated to learn stuff on my own, so I'm not entirely ungrateful.
most displays can show 60 images every second (60 Hz refresh) (there are better monitors with higher refresh rates)
and you have a technology called VSync, which makes sure the application isn't showing more frames than your monitor can handle
without VSync, games can show up to unlimited FPS; if you have a good graphics card and CPU the number will be very high, and each frame's calculation and drawing is done in 1/X of a second (X being the number of FPS)
it's all quite fast..
I always wondered what the different compilers of C are and why they are different.
Then I wondered what the C# compiler is; can I run C# code anywhere other than my Visual Studio?
Now I am wondering if learning ASP.NET was a mistake.
All these are so lame that I never bothered to ask.
I studied multimedia at university and picked up programming along the way, however I have no formal computer science education.
What do I do?
>I always wondered what the different compilers of C are and why they are different.
The current big three C compilers are GCC, Clang+LLVM, and Microsoft's Visual C++. There's also Intel's ICC.
GCC and Clang are open source (GPL and BSD respectively). GCC is the classic open source compiler which is used heavily in the Linux world. It's also the compiler used for compiling the Linux kernel. Clang is sort of a newcomer, heavily pushed by Apple. It's based on the LLVM compiler toolchain and is a bit more modular than GCC. It's now the major compiler for OS X and iOS programs and is quickly gaining ground on Linux and *BSD.
MSVC is the compiler Microsoft ships with Visual Studio. It's not bad, but it takes a really long time to compile. It only targets Microsoft platforms.
Intel's C compiler is the best optimizing compiler out there. Many games and engines use it. The only problem with that is that it introduces CPU vendor checks that only execute the optimized code paths when an Intel CPU is used.
I work in web development, so my experience is mostly with high level languages and oodles of libraries which do all the heavy lifting for me. I would like to learn from the bottom up, if you follow.
I suppose what I really need is first of all a crash course on the entire stack, from hardware to assembly to OS to software (or however it goes) and then a thorough meta-programming book on how to break problems down. I know how to implement good code in the languages I've learnt, it's when I get to systems with large numbers of objects and asynchronous processes I find myself lost. It's down to planning, I'm sure.
Because it's less shitty to work with than shared memory.
>Say if I'm watching a video then delete that file, It still plays because its in ram right? Is there a way to take a file that is loaded into ram, and make a copy of it onto your hard drive?
Deleting a file on Linux removes it from the directory immediately (so you can't see it any more), but if the file is open somewhere, the data will actually stay on disk until the file is closed. If you manage to find a process with the file open, you can look in /proc/<process id>/fd and extract the file contents from there.
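Here's a small Linux-only demonstration of that (it assumes a /proc filesystem, so it won't work on Windows or OS X). The process itself holds the deleted file open and reads it back through its own fd entry; recovering from another process works the same way with /proc/<pid>/fd/<fd>, given permissions.

```python
# Linux-only sketch: a deleted file stays recoverable while some process
# still holds it open. This process opens a temp file, deletes it, then
# reads the data back through the /proc/self/fd/<fd> symlink.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"still here after delete")
os.unlink(path)  # the file is now gone from the directory listing

# The magic symlink still points at the open (deleted) file's contents.
recovered = open("/proc/self/fd/%d" % fd, "rb").read()
os.close(fd)
print(recovered)  # → b'still here after delete'
```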
>Then I wondered what the C# compiler is; can I run C# code anywhere other than my Visual Studio?
There's Mono, which is a cross-platform .NET runtime, but apparently it's slow and has poor library support. But if you really want to do cross-platform development in C#, you could try targeting that.
>I have no idea how to effectively break down a problem into manageable pieces and rationalise an efficient solution to each part. Where can I learn the skills necessary to do this? There are so many 'learn language X' courses, but virtually no 'learn how to write a well structured program' courses.
This is what software engineering courses are about. You could try finding video lectures online or something.
>Then I wondered what the C# compiler is; can I run C# code anywhere other than my Visual Studio?
Yes, C#'s compiler is csc.exe, located in C:\Windows\Microsoft.NET\Framework\v4.0.30319.
>Now I am wondering if learning asp.net was a mistake.
Nope, ASP.NET is great.
How can I call a function inside a python script from the command line?
I know it's possible using "python -c 'imports script; print script.function()'" but I want to call it like "./script.py function".
Bjarne's book is absolutely terrible unless you can stand reading a book that reflects its author's mind-numbingly boring personality and method of teaching.
That guy probably watches flies fuck for fun in his spare time.
Don't get me wrong, I've skimmed the book and there is a lot of info in it, it's just presented in the most boring way possible and will put off anyone new wishing to learn.
> use bash
> write scripts in it
Personally I've found the margin between the point where I can be bothered to automate something in bash and the point where I switch to a full blown scripting language like Python to be rather slim.
I'm no python wizard, but in almost every language, there's a system variable/array containing commandline args.
You take that array, do a case statement based on what's in the array, and have it call/jump to the appropriate function/subroutine
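In Python that system array is sys.argv, and the dispatch can be a dict lookup instead of a case statement. This is a hypothetical script.py (the command names greet/farewell are made up) showing the "./script.py function" style the question asked for:

```python
#!/usr/bin/env python
# Hypothetical script.py: dispatch a sub-command named on the command line,
# so "./script.py greet" ends up calling greet().
import sys

def greet():
    return "hello"

def farewell():
    return "bye"

# Map sub-command names to the functions they invoke.
COMMANDS = {"greet": greet, "farewell": farewell}

def dispatch(argv):
    # argv[0] is the script name; argv[1] is the requested sub-command.
    if len(argv) < 2 or argv[1] not in COMMANDS:
        return "usage: script.py {%s}" % "|".join(sorted(COMMANDS))
    return COMMANDS[argv[1]]()

if __name__ == "__main__":
    print(dispatch(sys.argv))
```

For anything bigger, argparse's sub-command support (add_subparsers) does the same thing with argument parsing and help text thrown in.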
Using OpenGL, there is a main loop, which is a while loop if you use C, and this loop will contain a call to swapbuffers() which switches the places of the backbuffer and the framebuffer thus displaying the previously hidden buffer and hiding the framebuffer that was just drawn to the screen.
This loop iterates as fast as the computer can iterate it, however if you have vsync enabled then the call to swapbuffers() will wait for the GPU to sync with the monitor and doesn't return until this is done, so at best the loop will iterate as fast as the monitor refreshes.
Inside the aforementioned main loop are also the functions that manipulate the models/meshes using matrices and vectors. Amusingly, the computer executes these functions as fast as the main loop iterates, which is pretty fast. That makes naive translations impossible - in an FPS where you look around in first person, the sensitivity would be too damn high (in fact you would turn 360 degrees in an instant). That's why most of these things are scaled by a time delta taken from the system clock, which is accessed via glfwGetTime() if you use GLFW (and you should), and which ultimately gets the current time from a crystal oscillator inside your computer.
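The time-delta scaling can be shown without any graphics at all. A sketch (timestamps are hard-coded stand-ins for glfwGetTime(); SPEED and turn are made-up names): movement is expressed per second, and each frame multiplies by the time that frame actually took, so the result no longer depends on how fast the loop spins.

```python
# Frame-rate-independent movement: scale each update by the time elapsed
# since the last frame instead of moving a fixed amount per loop iteration.

SPEED = 90.0  # degrees of turning per second

def turn(angle, dt):
    # The same real-time duration yields the same total rotation,
    # no matter how many frames it was split across.
    return angle + SPEED * dt

# One second split into 4 frames vs. 1000 frames: same total turn.
angle_slow_loop = 0.0
for _ in range(4):
    angle_slow_loop = turn(angle_slow_loop, 1.0 / 4)

angle_fast_loop = 0.0
for _ in range(1000):
    angle_fast_loop = turn(angle_fast_loop, 1.0 / 1000)

print(round(angle_slow_loop, 6), round(angle_fast_loop, 6))  # → 90.0 90.0
```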
Can't find a real dumb-question thread, so bear with me for using this one.
I'm currently on a Radeon HD 4870 and would like to upgrade little money (below 200 €, unless there is a really great deal around 250 €).
A GeForce GTX 750 Ti looks nice and will probably save me quite a bit of power (which is quite important at 25 ct/kWh), but I'm afraid it doesn't deliver enough performance to be worth upgrading. I couldn't find any benchmarks that tested the HD 4870 and the GTX 750 Ti under the same circumstances. What do you think?
> Sub-commands
> Many programs split up their functionality into a number of sub-commands, for example, the svn program can invoke sub-commands like svn checkout, svn update, and svn commit.
Sounds like what I'm looking for, thanks!
What do prepared statements have over this?
what the fuck is a hashtable? i know what a hashmap is.
How the fuck do I actually code? Not like writing code, but actually putting things together the way I want them? I know how to write code, but every time I think of something to do, no matter how simple or complex, I can't fucking reason my way to a conclusion.
It doesn't feel like problem solving, either: problem solving gives me a problem to solve, programming is working backwards from solution to problem and back again, and it makes my mind turn to fucking mush.
It's not imperative that I know, I already switched away from CS as a major, I'm just curious if anyone can give some insight, maybe I can pursue it as a hobby.
I went over your post and thought deeply about it, and you gave me one damned good idea.
And with that, I'll try to answer your question.
A hashtable is a data structure similar to an array (and in a sense, it is one). To access the data, the "key" is hashed into an integer, which determines which element of the array will be chosen to access the value.
The point of the hashtable is to access data "faster", at the cost of more memory used.
Keys are hashed with any algorithm the implementer wants; there are a few common ones. Basically, take into consideration that you want to fill the table evenly, so most keys should give you a different hash.
When two different keys give the same hash, that's called a "collision", and it can be solved by either making a bigger hashtable or by storing multiple values per slot. Some C implementations use pointers to linked lists so that any number of values can be stored.
Anyway, fewer collisions mean a better hash algorithm.
I think I said most of it.
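To make that concrete, here's a minimal sketch of a hash table with separate chaining (the linked-list approach described above, using Python lists as the chains; the class and its methods are made up for illustration, not any library's API):

```python
# Minimal hash table with separate chaining: the key is hashed to pick a
# bucket, and keys that collide share that bucket's list of (key, value) pairs.

class HashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # Any hash algorithm works; the goal is to spread keys evenly.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or collision): chain it

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.put("hp", 100)
table.put("mp", 50)
print(table.get("hp"))  # → 100
```

Real implementations also grow the bucket array and rehash once the chains get too long, which keeps lookups near constant time.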
It isn't easy. Lots of code gets rewritten and refactored because it didn't go right the first time.
If you're writing software to solve problems you've never tackled before, you will end up rewriting parts at some point, no matter how well you designed it.
Mostly I think you just have to get better at it, like with all things.
No see, that's the thing. I can't even get to the part where it needs to be refactored, nor can I train myself to think like that. I can think in pseudocode, but I can't put two and two together and put my thoughts into code. There is no training or refactoring, I just stare at the open file mindlessly.
I can follow tutorials online and make perfect sense of what they're saying.
I can sit down and read someone else's code base and (to some extent) make sense of what they did and why.
But I cannot, for the fucking life of me, write original code beyond a basic level. And I don't want to participate in something (as a hobby) if all I'm going to do is plagiarize tutorials and others' code.
That's what kills me the most: I get the underpinnings of it, it's just putting thoughts to code that shuts my brain down faster than an old person seeing Windows 8.
You are expecting too much of your code, you want it to be perfect so you think hard about -what- you should write to bring your idea to life, but you are not actually thinking, you are thinking about thinking.
You can understand but you are not staying mentally involved, you are not taking the mental initiative, you are not maintaining the offensive mentality.
You just need to start writing shit; when I start most of my projects I just start hammin out bullshit code but you see it is this initial process that allows me the crucial opening I need to maintain concentration.
Plan it out, but for the love of god just get in there.
A concrete example:
I was trying to make a program that prints a triangle of a specified size on the command line. I was using C, so I just sat down, wrote #include <stdio.h> and int main and all that shit, then I just wrote the first thing that entered my mind, which was a for loop. And that little illusion of progress, the simple act of typing the damned loop, was enough to keep me going.
Please help me get started on making Android apps/games. Are there any tutorials/resources you (or the internet sources you've read) trust and can recommend to start making apps quickly? I have Java/C 101 knowledge. Thank you in advance.
Yes, it is for indie devs. It's easy to use, with lots of documentation and examples online. It's fast too, except for the shitty garbage collection trying to free unused memory, but that can be avoided by simply pooling objects.
I'm developing a Mario-style game for Android in C#, with good FPS all the time from 40-60, and I have a low/mid-end mobile (Xperia Neo L), so that would typically be 60 FPS on an average/high-end Android mobile.
Image: Left sidebar, all my scripts so far
No, he proved it's impossible in 100% of cases, i.e. a program can't improve itself from every possible starting state. Limited self-improvement has already been programmed; look at how Windows 8 makes itself more efficient over time.
Iterative complete self-improvement is fucking hard, and making it only improve in desirable ways has never been proven to be possible.
Since when does Android support C#?
They implemented a bridge between Java and C(++) which allows you to run compiled C++ from a Java program (with massive overhead while starting)
Obviously adequate programming skills are needed.
But even with average programming skills in either Java or C#, you're gonna get faster performance using C#.
I thought this was common knowledge amongst /g/.
Don't you lot spout "Java is slow and shit" 24/7? Hence clearly C#, the proprietary M$-owned language, is gonna be more powerful.
Depends on your criterion for improvement.
There are a lot of research programs that generate populations of programs and evolve them to do some useful thing (e.g. predict some process).
There are even programs that generate generators of programs and evolve them:
You can read more at http://www.idsia.ch/~juergen/geneticprogramming.html
>self-improving, metalearning GP approach that recursively applies metalevel GP (first introduced here) to the task of finding better program-modifying programs on lower levels. The goal was to use GP for improving GP
Also note that while these methods can generate interesting results, they are just variants of random search.
>>42376862 It depends on the criterion for improvement. If we don't require our program to generate provably more optimal programs, then suboptimal candidates and the GP approach are OK.
There is a common pattern in the design of games: there is an abstract representation of the game's objects (your hero has x and y coordinates, for example), the code that advances these objects' state in time (e.g. moves your hero with constant speed, which just means adding some number to the x and y coordinates), and then there is the actual code that draws the game's objects on the screen.
You should understand how to represent every object in your game with numbers if you want to create one.
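A tiny sketch of that pattern, with all names made up for illustration: the object is just numbers, one function advances the state, and a separate function "draws" it (here it only returns a string, where a real game would render a sprite).

```python
# The three parts of the pattern: numeric representation, state advancement,
# and drawing, kept strictly separate.

class Hero:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y        # position
        self.vx, self.vy = vx, vy    # constant speed, units per second

def advance(hero, dt):
    # Moving with constant speed just means adding to the coordinates.
    hero.x += hero.vx * dt
    hero.y += hero.vy * dt

def draw(hero):
    # Stand-in for rendering: describe the state instead of blitting a sprite.
    return "hero at (%g, %g)" % (hero.x, hero.y)

hero = Hero(0, 0, 3, 4)
for _ in range(2):
    advance(hero, 0.5)   # two half-second steps = one second of movement
print(draw(hero))  # → hero at (3, 4)
```

Because advance() knows nothing about drawing and draw() knows nothing about time, you can later swap the renderer (console, SDL, OpenGL) without touching the game logic.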
For the life of me, I cannot get this bumblefuck of dependency errors to build: https://github.com/archrival/Subsonic-Android All I want to do is hack in some proper self-signed certificate validation. If anyone could point me in the right direction, I'd appreciate it
>but actually putting things together the way I want them?
Maybe you should think more about entities of the real world that are represented in your program and types of operations that your code has to do on their representations?
What is the best book to learn how to think critically to develop problem solving skills in programming. For example, I have a programming exercise and I need to break it down and figure out what it needs. Which is the best book for method Abstraction and Stepwise Refinement
>look at how Windows 8 makes itself more efficient over time.
First, read wiki entries on the language you are going to learn, then go to http://www.codecademy.com/tracks/python (or other online learning service) and read your language's docs https://www.python.org/doc/
For C++ software, I learned from 'Modern C++ Design' and 'Effective C++'. I think they cover what you are looking for, however, they are not for beginners. Do da google search if interdasted
how on earth was X programmed? i understand the use of XCB/X11 libraries to handle the X protocol in order to tell it what to draw to the screen, but how is X itself written so that it draws to the screen?
can X be replaced in the same way by other software written to do the same thing?
See this: http://www.cs.rit.edu/~ats/books/ooc.pdf
X.org is written in C.
>can X be replaced in the same way by other software written to do the same thing?
If you mean X.org, there are a few implementations of the X11 protocol.
>can X be replaced in the same way by other software written to do the same thing?
Of course, there are other, more direct APIs that allow you to draw on the screen. E.g. DirectFB.
What's the best way to:
- Store an unknown amount of integers
- Store another unknown amount of integers (mostly the same numbers)
- Compare both and get the numbers that were in the first one but aren't in the second one
- Repeat 1000x per second with 0% cpu load
It's for selecting files with a selection rectangle
I don't use full row selection so the integers are not like 0, 1, 2, 3... but more like 1, 7, 9, 18...
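Sets are the natural fit here: hash-based membership makes "in the first but not the second" effectively linear time, which is cheap at 1000x per second (though for truly ~0% CPU you'd only recompute when the mouse actually moves). A sketch with made-up sample indices:

```python
# File indices selected by the rectangle on the previous and current frame.
previous = {1, 7, 9, 18}     # was selected last time
current  = {1, 9, 21}        # inside the rectangle now

# Set difference gives exactly "in the first one but not in the second one".
deselected = previous - current   # left the rectangle
selected   = current - previous   # newly entered the rectangle

print(sorted(deselected))  # → [7, 18]
print(sorted(selected))    # → [21]
```

Tracking both differences means you only toggle the items that changed, instead of redrawing the whole selection every frame.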