Link the paper at least
nature url is spamlisted for some reason
https://news.ycombinator.com/item?id=36228125
Say it with me anons :
Published.
In.
Nature.
>nature
>nature supports the big winner
Based. Trump lost btw.
Not an IEEE or ACM journal? Seems a bit odd for a paper that allegedly improves upon fucking sorting...
Nature is the most prestigious journal in all of the natural sciences anon
So deepmind is a natural science laboratory? not an Alphabet subsidiary?
Then again AlphaFold and AlphaGo were also Nature publications (Nature is also in the UK), seems like DeepMind has a connection to someone and prefers this journal.
Interesting. Now I know you haven't read it yourself.
No time to waste anon, there's sudacas to make seethe today
It only took 30 scientists working with millions of dollars in funding to improve a single special case of sorting a few integers in a code base that is millions of lines of code and written entirely by volunteers, it is so fucking over
I'd be afraid too anon.
One time I actually tried to implement one of these bizarro academic algorithms into a library I was making. It was such a massive waste of time: the sample C implementation didn't compile, and even once I fixed it, it was still useless for any actual non-trivial data set.
Interesting. Faster than what?
Show the algorithms.
It's a good thing being a software engineer isn't just solving a series of self-contained leetcode problems
>high IQ leet code spergs lose their jobs
>midwit web devs and java gays keep theirs
lmao
extremely likely to happen, because the inflated VC era salaries are the ones getting targeted for reduction, while the low-to-medium paid midwits who tick an ESG diversity box are most likely to stay
yea because the algorithms are already done
you don't need to know them all, you just need to understand when to use which. Leetcode is just a test of whether you are willing to autistically code for hours on end
According to the nature.com article:
"AlphaDev operated at the level of assembly instructions: code generated by automated compilers from code that programmers write in C++"
"Depending on the processor used and the number of values to be sorted, AlphaDev’s best algorithms took between 4% and 71% less time than did human algorithms. But when the algorithms were called multiple times to sort lists of one -quarter of a million values, the cumulative time saving was only 1–2%, because of other code it did not optimize."
It's a start. What if I never get to invent new sorting algos in asm? The article has me really butt hurt. Like full on rectally ravaged.
It's at worst the same as current algorithms but already shows it can be much faster
It can only get better from here
So combine AI optimized code with human optimized code, then train on those optimized outputs
wow it can sort 5 numbers really fast
*yawn*
No one gets hired to code sorting algorithms except a couple of autists who work in niche fields.
Good thing the argument was about how "AI can't create anything valuable just entry level code", an argument you have now canonically and utterly lost
Honestly I'd rather pajeets just prompt the code and have the AI actually write good code.
chud bot thread
I was thinking about this the other month actually. LLMs would be great for machine & bytecode optimization. Welp, here we are.
That's not what it's doing at all. It's more akin to taking three ints and then sorting them without using if-else
// Ensures that __c(*__x, *__y) is true by swapping *__x and *__y if necessary.
template <class _Compare, class _RandomAccessIterator>
inline _LIBCPP_HIDE_FROM_ABI void __cond_swap(_RandomAccessIterator __x, _RandomAccessIterator __y, _Compare __c) {
  using value_type = typename iterator_traits<_RandomAccessIterator>::value_type;
  bool __r = __c(*__x, *__y);
  value_type __tmp = __r ? *__x : *__y;
  *__y = __r ? *__y : *__x;
  *__x = __tmp;
}

// Ensures that *__x, *__y and *__z are ordered according to the comparator __c,
// under the assumption that *__y and *__z are already ordered.
template <class _Compare, class _RandomAccessIterator>
inline _LIBCPP_HIDE_FROM_ABI void __partially_sorted_swap(_RandomAccessIterator __x, _RandomAccessIterator __y,
                                                           _RandomAccessIterator __z, _Compare __c) {
  using value_type = typename iterator_traits<_RandomAccessIterator>::value_type;
  bool __r = __c(*__z, *__x);
  value_type __tmp = __r ? *__z : *__x;
  *__z = __r ? *__x : *__z;
  __r = __c(__tmp, *__y);
  *__x = __r ? *__x : *__y;
  *__y = __r ? *__y : __tmp;
}
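Roughly what those two helpers boil down to with plain ints, just to show the no-if-else point (my own paraphrase of the snippet above, not the actual libc++ template or AlphaDev's assembly):

#include <cstdio>

// Same contract as __cond_swap above: after the call, x <= y.
// The ternaries typically compile down to conditional moves, not branches.
void cond_swap(int& x, int& y) {
    bool r = x < y;
    int tmp = r ? x : y;
    y = r ? y : x;
    x = tmp;
}

// Same contract as __partially_sorted_swap above: orders x, y, z
// assuming y <= z already holds.
void partially_sorted_swap(int& x, int& y, int& z) {
    bool r = z < x;
    int tmp = r ? z : x;   // min(x, z)
    z = r ? x : z;         // max(x, z)
    r = tmp < y;
    x = r ? x : y;
    y = r ? y : tmp;
}

int main() {
    int a = 7, b = 2, c = 5;            // b <= c, as the helper assumes
    partially_sorted_swap(a, b, c);
    std::printf("%d %d %d\n", a, b, c); // 2 5 7
}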
really? damn
my lack of reading comprehension and susceptibility to making sweeping conclusions from skim-reading internet comments fooled me again
The company that published the paper is owned by Google if that makes any difference.
Holy shit I love multiculturalism
>slavs
>multikulti
Everybody wants to talk about this paper but nobody wants to read the damn thing. Do we at least know to which sorting algorithms the new ones were compared to?
These.
https://en.wikipedia.org/wiki/Sorting_network
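A sorting network is just a fixed sequence of compare-swaps that doesn't depend on the data. For five elements it looks roughly like this (one of the known 9-comparator networks; my own sketch, not the code from the paper):

#include <algorithm>
#include <cstdio>

// Compare-exchange: after this, a <= b. Branch-free via min/max.
static inline void cswap(int& a, int& b) {
    int lo = std::min(a, b);
    int hi = std::max(a, b);
    a = lo;
    b = hi;
}

// Fixed sequence of 9 comparisons, independent of the input values.
void sort5(int v[5]) {
    cswap(v[0], v[1]); cswap(v[3], v[4]);
    cswap(v[2], v[4]); cswap(v[2], v[3]);
    cswap(v[1], v[4]); cswap(v[0], v[3]);
    cswap(v[0], v[2]); cswap(v[1], v[3]);
    cswap(v[1], v[2]);
}

int main() {
    int v[5] = {5, 1, 4, 2, 3};
    sort5(v);
    for (int x : v) std::printf("%d ", x); // 1 2 3 4 5
}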
Who the fuck needs faster sorting that cannot be solved with more hardware?
Have you seen what that looks like in practice? Scaling something small goes up really fast and easily will bottleneck "fast" computers with shit code.
https://randomascii.wordpress.com/2019/12/08/on2-again-now-in-wmi/
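Toy example of that effect (mine, not from the linked post): the same "small" operation done n times, one way quadratic, one way linear. At a big enough n the quadratic one eats the machine no matter how fast the hardware is.

#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int n = 100000;
    using Clock = std::chrono::steady_clock;
    using Ms = std::chrono::duration<double, std::milli>;

    std::vector<int> a;
    auto t0 = Clock::now();
    for (int i = 0; i < n; ++i)
        a.insert(a.begin(), i); // shifts every existing element: O(n^2) overall
    auto t1 = Clock::now();

    std::vector<int> b;
    for (int i = 0; i < n; ++i)
        b.push_back(i);         // amortised O(1) per insert: O(n) overall
    auto t2 = Clock::now();

    std::printf("front insert: %.1f ms, push_back: %.1f ms\n",
                Ms(t1 - t0).count(), Ms(t2 - t1).count());
}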