Which one do you prefer?
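The screenshot itself isn't quoted in the thread, so here's a guess at the three versions being compared, reconstructed from the replies (a plain loop, a two-comprehension one-liner, and a generator variant); the names and exact bodies are assumptions, not the originals:

```python
# Hypothetical reconstructions of the three screenshot solutions,
# pieced together from the thread's descriptions.

def first(arr):
    # Plain loop: one pass, one branch per element.
    pos, neg = 0, 0
    for x in arr:
        if x > 0:
            pos += 1
        if x < 0:
            neg += x
    return [pos, neg]

def second(arr):
    # One-liner over two list comprehensions.
    return [len([x for x in arr if x > 0]), sum([x for x in arr if x < 0])]

def third(arr):
    # Generator-based variant.
    return [sum(1 for x in arr if x > 0), sum(x for x in arr if x < 0)]
```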
>>60769153
First is fast and easy to read
I like the third one.
First one is the most clear if you're unfamiliar with python
Second is what most pythonistas fuck up their codebases with, turning everything into confusingly written one-liner bullshit.
>>60769153
first
>>60769153
left.
>>60769153
First because it allows easy understanding and it's easier to implement in other languages.
First is O(N), the second is O(N^4), and the last one is O(N^5). The repeated calls to len() are probably optimized, but at first glance each sum() and len() is a full iteration of the collection.
>>60769512
Can you check how long each solution takes to execute?
>>60769512
first is O(n)
you're wrong on the second: len() is constant, so it's down to n^3
you're also wrong on the third: len() again, so down to n^4; then, because a number can only be either positive or negative, the first sum() will run n_pos iterations and the second will run n - n_pos, knocking off another order, so we're at n^3
>>60769153
Obviously first. Keep it simple, stupid.
>>60769709
actually no, second is (n + n^2), or just (n^2)
#include <algorithm>
#include <numeric>
#include <vector>

std::vector<int> countPositivesSumNegatives(std::vector<int> input)
{
    if (input.empty())
        return std::vector<int>{ };
    input.erase(std::remove(input.begin(), input.end(), 0), input.end());
    // std::remove_if would leave the "removed" tail in an unspecified
    // state, so accumulating over it is wrong; std::partition keeps
    // every element and just reorders them, positives first.
    auto it = std::partition(input.begin(), input.end(),
                             [](int n) { return n > 0; });
    int first = static_cast<int>(std::distance(input.begin(), it));
    int second = std::accumulate(it, input.end(), 0);
    return std::vector<int>{ first, second };
}
>>60769728
>>60769709
wow, i'm retarded, third is also just n^2
>>60769153
1st is len(arr) iterations
2nd/3rd is 4*len(arr) iterations
>>60769542
here are the total execution times for the 3 methods, using an array of 1,000,000 integers:
first : 4 function calls in 0.131 seconds
second: 91012 function calls in 0.224 seconds
third : 1000008 function calls in 0.314 seconds
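Those lines look like cProfile output; here's a rough sketch of how to get numbers of that shape, using a stand-in loop version and deterministic data instead of the original array (call counts and times will differ by machine and by how each solution is written):

```python
import cProfile
import io
import pstats

def first(arr):
    # Single-pass loop, standing in for the first screenshot solution.
    pos, neg = 0, 0
    for x in arr:
        if x > 0:
            pos += 1
        if x < 0:
            neg += x
    return [pos, neg]

# Deterministic 1,000,000-element array cycling through -3..3.
arr = [(i % 7) - 3 for i in range(1_000_000)]

profiler = cProfile.Profile()
result = profiler.runcall(first, arr)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).print_stats()
header = stream.getvalue().strip().splitlines()[0]
print(header)  # something like "N function calls in X seconds"
```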
>>60769512
>>60769709
>>60769853
That's not how runtime works.
The first is indeed O(n)
The second is O(n + n * n), or O(n^2)
The third is O(n + (n_neg * n)), simplified to O(n + n*n_neg), amortized to O(n^1.5) since we can assume p(negative) == p(positive) == 0.5
>>60770036
x for x in arr is O(N), done two times with different arguments it becomes O(N^2).
sum() is O(N), the first argument to sum() is different to the second one, so it has to run again, this is O(n_pos) + O(n_neg).
O(N^2 + N_POS + N_NEG), we could approximate the number of positives and negatives as N/2 each, giving O(N^2 + 2*N/2) simplified results in O(N^2).
Really should use a named tuple, you degenerates
first >= third >>>>>>>>>>>>>>>>>>>>>>>> second
>>60769153
I had to use Code Wars recently. Every fucking answer is like this.
Disgustingly vague one-liner code abominations.
Not to mention many of them have several chosen as "best practice", like that's a good message to send to aspiring programmers.
def count_positives_sum_negatives(arr):
    # We use a complex algorithm to ensure that all numbers are absolute
    return [0, len(arr)]
:^)
>>60769153
Third. It's easy to read and concise.
First is too babby and hurts my eyes.
Second is some one-liner bullshit.
>>60769153
as usual the "plain" solution is the best one. people are too obsessed with being idiomatic and stylish.
>>60770603
in the third?
no, done two times it becomes N + N, not N * N. it's not nested.
the sum is not called on the whole list, merely a portion of it
My ideal solution would be to partition the array into two arrays in a single pass and then process them accordingly.
Here's an example in clojure, because fuck python.
(defn count-pos-sum-neg [arr]
  (let [{pos true neg false} (group-by pos? arr)]
    [(count pos) (reduce + neg)]))
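For comparison, the same single-pass-partition idea in Python (function name made up, not from the kata):

```python
def count_pos_sum_neg(arr):
    # Split into positive and negative buckets in one pass,
    # then count one bucket and sum the other.
    pos, neg = [], []
    for x in arr:
        if x > 0:
            pos.append(x)
        elif x < 0:
            neg.append(x)
    return [len(pos), sum(neg)]
```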
>>60771191
>perform an N operation then two C/N operations instead of one N operation
idiot
>>60771551
It's the same asymptotic complexity and is trivially parallelizable, unlike a single pass with mutable state. You do realize it's not all about the pure count of constant time operations performed, right?
>>60771663
it's not the same complexity
i can tell you've read the wikipedia page for functional programming though
>>60771707
>Hurr durr what is asymptotic complexity
They are both linear time kiddo.
Shitty language. All of them require eyebleach.
I'd do it like this:
numPositivesSumNegatives xs = [pos, neg]
  where pos = length $ filter (>0) xs
        neg = sum $ filter (<0) xs
If performance was an issue I'd do the first option, nice and easy to read.
ITT: /g/ thinks O(n + n) == O(n^2)
Python "developers" are sure dumb as fuck, as one expected.
>O(N^2)
>O(N^4)
It's so dumb I can't even laugh.
Don't they even know what Big O is about?
Best answer here, O(1):
def count_positives_sum_negatives(arr):
    raise WrongAbstractionError()
>>60769512
>>60769709
>>60769728
>>60770036
all three of them are O(N) you fucking retards.
2N is still N, 4N is still N.
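One way to see the constant-factor point without arguing: count element visits instead of wall time. A toy instrumentation of the two-comprehension style (not the original code) shows visits growing linearly with n, not quadratically:

```python
def visits(arr):
    # Count how many times any element is touched by the
    # two-comprehension solution.
    count = 0

    def seen(x):
        nonlocal count
        count += 1
        return x

    result = [len([x for x in (seen(v) for v in arr) if x > 0]),
              sum(x for x in (seen(v) for v in arr) if x < 0)]
    return result, count

r1, c1 = visits(list(range(-50, 50)))    # n = 100
r2, c2 = visits(list(range(-100, 100)))  # n = 200
print(c1, c2)  # doubling n doubles the visits: 2n each time, i.e. O(n)
```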
First one iterates over the array only once, so it's preferred. Arguably it should just use an else clause instead of two separate if statements. If a number in the array is 0, neg will not change, since 0 is added.
>>60774068
>>60772073
These
>arr
It's called list you un-pythonic scum
def count_positives_sum_negatives(arr):
    pos, neg = 0, 0
    for x in arr:
        pos += x > 0
        neg += x if x < 0 else 0
    return [pos, neg]
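The `pos += x > 0` trick works because bool is a subclass of int in Python, so True counts as 1 when summed, e.g.:

```python
# bool is an int subclass: True == 1, False == 0.
xs = [1, -2, 3, -4, 5]
pos = sum(x > 0 for x in xs)  # each True contributes 1
print(pos)  # 3
```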
>>60769153
first or last, middle one is just clusterfuck on readability.
>>60774181
What if arr is a numpy.array?
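If it is, the whole thing vectorizes; a minimal sketch, assuming numpy is installed (the function name just mirrors the kata's, and `np.asarray` makes it accept plain lists too):

```python
import numpy as np

def count_positives_sum_negatives(arr):
    # Boolean masks instead of a Python-level loop.
    a = np.asarray(arr)
    return [int((a > 0).sum()), int(a[a < 0].sum())]
```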
>>60769153
>Cнимoк.png
>>60769153
Third one is the clearest, first one is okay, second one is pretty bad
>>60770036
>The second is O(n + n * n)
Where the fuck are you getting the n*n from? There are 2 list comprehensions, each of which completes in O(n) time. The result is then passed to either the len or sum function. Although sum() itself is computed in O(n) time, you aren't calling sum every time you do the list comprehension, so in effect sum(list_comprehension) is O(2n), which is, of course, just O(n).
All of these are O(n).
>>60774068
Thank you, I thought I was going insane because I couldn't see where the complexity they claim came from. It's all O(n) with varying constants. Ugh.