People don't understand the biggest problem of ML: Deferral of responsibility.
>Sorry, the AI concluded you were not eligible for a credit, sorry, nothing we can do, computer said so
>Sorry, the AI determined that your driving style is dangerous, car is now locked and can't be started, nothing we can do, computer said so
>Sorry, we had an AI-supported imaging component in our photocopiers, the business-critical invoices and balance sheets all have wrong numbers after copying, going back 5 years. Your business is now in violation of IRS and SEC statutes and will be foreclosed, nothing we can do, computer said so
>Sorry, you receive a death sentence by lethal injection, because the AI determined with 99.99999461251 confidence that you look like the serial killer this one CCTV camera recorded, nothing we can do, computer said so
Sounds like it's replicating the existing government bureaucracy. Nothing lost, nothing gained.
>dis ai be raysis yo, i wus just goin to church n helpin da homeless ppls. imma go to da medias n shiet
Sorry sir, please have these $50 million USD and gold plated UZI.
Which would be fine (still more logical than bureaucracy), if it wasn't legal let alone possible for onions valley gays like op picrel to fuck with their models like that as they see fit.
>problem
This is literally how the government already works.
>Sorry we can't help you, this piece of paper says that you owe a billion dollars in speeding fines despite you never having owned or driven a car. Pay up or die.
>problem
but that's why they do it
it's literally a feature
That's already how it works. All those industries already use automated software, INCLUDING ML-based software, in their stack, especially insurance, and already make decisions this way. While judgement isn't computer-based yet, resource allocation is already handled by ML-based software that tells cops to patrol some at-risk area or to ignore some safer area.
Can't you hire a human to verify the results? Even if it has some misguided judgement, it's still getting the majority of cases correct, so the person checking the results wouldn't have to correct much.
This is why, in the cases you described, you don't use a machine learning model but an expert system capable of explaining its decision-making process.
black people just look like apes
sorry but it is the truth
what a racist amerilard take
as if there are no morons that arent "african american"
It's not bullshit, but it is just statistics on steroids.
It's trial and error, on steroids.
Not trial and error. Statistics. It's doing a kind of curve fitting, except it's a hypersurface in a very high dimensional configuration space. The amazing thing is that it works at all, and often fairly well.
Still, it's only as good as your training data (at best). AI is money laundering for prejudice.
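To make the "curve fitting in a very high dimensional configuration space" point concrete, here's a toy sketch (not from any poster; it assumes NumPy is available): a one-hidden-layer net fit to noisy sin(x) by plain gradient descent. The real thing is the same idea with millions of dimensions instead of a handful.

```python
# A minimal sketch of the "curve fitting" view: a one-hidden-layer net fit to
# noisy sin(x) with plain gradient descent.  Pure NumPy, no ML framework assumed.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)   # noisy target curve

h = 32                                                # hidden width
W1 = rng.standard_normal((1, h)) * 0.5; b1 = np.zeros(h)
W2 = rng.standard_normal((h, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    # forward pass: y_hat = tanh(x W1 + b1) W2 + b2
    a = np.tanh(x @ W1 + b1)
    y_hat = a @ W2 + b2
    err = y_hat - y                                   # dL/dy_hat for MSE/2
    # backward pass (chain rule by hand)
    dW2 = a.T @ err / len(x); db2 = err.mean(0)
    da = err @ W2.T * (1 - a ** 2)
    dW1 = x.T @ da / len(x);  db1 = da.mean(0)
    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean(err ** 2)))         # small if the curve fit worked
```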
>The amazing thing is that it works at all
It's not surprising to anyone but CStards who didn't pay attention in their undergrad maffs classes.
The whole meme learning renaissance is basically CStards rediscovering undergrad maffs.
Why do people on BOT have to keep this aloof "too good for this" reductionist attitude towards anything and everything?
>ughhh why are you excited this is all just a buncha preschooler bullshit d-.-b
It's because of retards who think ML is some kind of godsend. Tech gurus especially like to propagate it while not knowing shit about it and even worse are those singularity tards.
AI is talked about like climate change.
They toss around buzz words and repeat some fact they heard, but have no understanding of any of it.
Cont.
Here are the areas ML is actually useful:
>Anything image recognition and computer vision-related
>Translation and natural language processing
>Advanced recommender systems
>In more general terms, anything pattern recognition-related
That's literally it. Anything else is either a complete meme or in its infancy.
Just say solving problems with non-linear constraints, krugergay.
Nope. The vast majority of these problems have so many local maxima/minima that no AI will converge to the global one.
>ai
optimizer
Machine learning.
People have investigated 2nd-order optimizers for ages, even for NNs, and ML in general can use higher-order optimizers. Those are far less prone to getting stuck; the problem is they're too slow. Algorithms like Adam rescale each step by a running estimate of the gradient's second moment, which loosely approximates 2nd-order preconditioning and helps escape saddle points and plateaus, which is why they work so well.
"ML" does not prescribe any specific optimizer. MCMC, EM, and other sampling-based methods have completely different failure modes than gradient-based optimizers, for example, but they're still valid approaches actively used in ML.
Similarly, linear models (with convex losses) have a single minimum that gradient-based methods always reach, but they are still a kind of ML.
Neither the optimizer nor the structure of the problem is a feature of ML.
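As a rough illustration of the optimizer discussion above (a toy sketch with a made-up objective, assuming NumPy; not anyone's production setup): vanilla gradient descent stalls next to a saddle point where the gradient is tiny, while Adam's per-coordinate rescaling walks out of the flat region quickly.

```python
# Toy sketch: plain gradient descent vs Adam on f(w) = w0**4 - 2*w0**2 + w1**2,
# which has a saddle at the origin and minima at w0 = +/-1.  Started just off the
# saddle, the tiny w0-gradient stalls vanilla GD, while Adam keeps moving.
import numpy as np

def grad(w):
    return np.array([4 * w[0] ** 3 - 4 * w[0], 2 * w[1]])

def run_gd(w, lr=0.01, steps=150):
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def run_adam(w, lr=0.01, steps=150, b1=0.9, b2=0.999, eps=1e-8):
    m, v = np.zeros_like(w), np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # running mean of gradients (momentum)
        v = b2 * v + (1 - b2) * g ** 2     # running mean of squared gradients
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w_start = np.array([1e-4, 1.0])            # just off the saddle point
print("GD   ends at:", run_gd(w_start))    # w0 still close to 0: stuck near the saddle
print("Adam ends at:", run_adam(w_start))  # w0 heads toward +/-1
```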
>>In more general terms, anything pattern recognition-related
The number of times I see a bean-counter (or vaguely technical "consultant") try to shoehorn their particular business process into a pattern-matching problem, solely to justify building something with "machine learning" in the name, is way too fucking damn high.
So many computer systems really are just basic CRUD shit, because so many business problems are
>monkey on typewriter data-entry
>check if the data entered is correct, apply further calculations if it is
>when someone asks for the data later, based on some arbitrary filter, return the result you calculated
Meanwhile projects take longer because of bullshit meetings to talk bean-counters out of making the whole thing a chain of buzzwords that do nothing.
This entire post exactly sums up the 300+% increase in investment my company is putting towards our tech departments purely for muh ML
>into a pattern matching problem,
Anon, pattern recognition is not the same as pattern matching. If you actually have a pattern recognition problem, it's fairly hard to get by with just basic algos.
But yes I know what you mean regarding the rest of the post
AI is more than just ML
For all intents and purposes, no.
You say that, but practically speaking, what else related to AI is being bandwagoned by every medium-to-large company?
What is there that I'm missing?
Yes, most of what is being talked about is ML. But rather than saying AI is just statistics, I think it would be better to correct people and say that ML is just statistics.
AI is something far greater. There's no reason some day we won't be able to create neurons and arrange them in a way that is functionally a working, intelligent brain. this fits the definition of artificial intelligence. right now we are trying to do it with transistors, and I believe it is possible to achieve AI this way as well.
What makes your own brain anything more than just statistics and pre-programming? I think it is safe to say your own brain is more than that, though it is hard to put into words. Same can be said about a cats brain. I think it is safe to say even an ant possesses intelligence.
Theres more to intelligence than just learning. theres critical thinking. theres recognizing mistakes. There is almost universally a desire for self preservation and reproduction, and with humans things like honor or gratification. So, I dont think its fair to say that AI is just ML. learning is certainly a key part and the one most discussed, but intelligence is more than that
That's because you mean AGI. When people say AI and aren't an msm memespeaker, they mean "non-trivial automation".
>AI is something far greater.
>2 more paragraphs with abbreviation typos and rhetorical questions, and fuck all for practical usecases.
Get off of your goddamn soapbox and actually code some shit please. You're waxing poetic like you're writing the next great novella when most of programming is just making Excel dumbass proof and caching porn and cat videos at CDNs so some chinesium shitbox can play them back at 720p.
I dont see a single typo. Sure, I omit apostrophes in contractions and dont capitalize all my sentences. The only abbreviations I used were the same ones the person I was responding to used first.
I wasnt trying to be poetic. How the fuck do you try to differentiate intelligence and learning to someone who seems to think theyre the same thing? I dont give a shit about demonstrating practical use cases. AI isnt just machine learning. I dont care what you do with that information other than acknowledge that it is a true statement.
>I dont see a single typo.
>Sure, I omit apostrophes in contractions and dont capitalize all my sentences.
anon, I....
>I dont give a shit about demonstrating practical use cases
Then your AI shit is useless, except for ML, where there's actually a practical usecase for doing something with it. Even a jacquard loom can make a useful product, which is more than you can say for generalized AI research and academic masturbation.
>I....
What did you mean by this?
I really don't like arguing over such petty shit but since I have autism I am going to anyway...
A typo is a mistake. What I am doing is being lazy, because this is BOT, not a formal email or forum or something.
Anyway, I just disagree with "AI" being called "just statistics" because I believe AI is more than that, but I am OK with machine learning being called "just statistics". Apparently there is a term AGI (artificial general intelligence) which is more defined as what I am referring to as AI.
I would imagine things like good chat bots or self driving cars could be considered more than just statistics and therefore more AI than ML. I suppose another aspect of AI could be the real-time aspect of it, vs machine learning, which doesnt need to be in real-time. Yes you can call these arbitrary but its all I can come up with in real time since you are putting me on the spot and forcing me to justify my belief that AI and ML are definitely distinct.
>AI isnt just machine learning. I dont care what you do with that information other than acknowledge that it is a true statement.
>since you are putting me on the spot and forcing me to justify my belief that AI and ML are definitely distinct.
The non-ML usages of AI either don't exist or are purely academic in nature. You can treat them as distinct, but if AI is separated from ML it has no practical industry purpose.
>The non-ML usages of AI either don't exist or are purely academic in nature.
Videogame AI?
Fancy chains of if statements, with Dijkstra's algorithm somewhere in the mix for pathfinding.
Next you'll tell me you consider a chess engine AI.
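For reference, the kind of "game AI" being described really is that small. A minimal sketch of Dijkstra's algorithm on a toy grid (the grid and names are made up for illustration; uniform step costs, so it's effectively BFS, but the priority-queue skeleton is what you'd extend with terrain costs):

```python
# Dijkstra pathfinding on a small grid: S = start, G = goal, # = wall.
import heapq

GRID = [
    "S..#....",
    ".#.#.##.",
    ".#...#..",
    ".####.#.",
    "......#G",
]

def dijkstra(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    goal  = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                                  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                nd = d + 1                            # uniform edge cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk back from goal to start to recover the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

print(dijkstra(GRID))   # list of (row, col) steps from S to G
```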
Just because it's primitive doesn't mean it's not AI.
Contrarians.
But also, AI is a big meme and misleading.
The Algorithms are neat and all but it's not creating a general intelligence no matter how it has been marketed.
most AI people who are not charlatans say general intelligence is a long way away and most of the work they do has nothing to do with it
>ai people
A lot of them are the cs equivalent of car salesmen.
People that have some clue don't say that. However, because AI is marketed as this amazing magical intelligence, the average joe has the wrong idea and won't listen to anyone that has a clue. ""Experts"" on their screen box tell them things, and they believe them rather than the experts they encounter irl.
>>ai people
>A lot of them are the cs equivalent of car salesmen.
There's AI developers, and AI salesmen. They are different.
>It's because of retards who think ML is some kind of godsend.
It opens up plenty of new and interesting possibilities.
>it's not creating a general intelligence no matter how it has been marketed.
I don't know where you're getting this from, but most people don't walk around saying that general intelligence is right around the corner.
>Suddenly, everyone in CS is rediscovering maffs
Because they didn't have enough computing power back then.
>and pretending it's some revolution they invented.
A lot of work goes into optimizing ML algorithms for specific tasks. Transformer models were conceived just a few years ago.
I feel like every time you accuse someone of thinking they're smarter than they actually are you're just projecting yourself.
It's misleading to say that transformers were conceived recently. Using fully-connected networks for everything, including in language, was the default. The lack of computing power meant we had to develop more clever models, like RNNs such as LSTMs and then GRUs, or convnets. Before transformers, the popular approach was convnets with self-attention and as large a receptive field as possible, i.e. basically a transformer, just more memory-efficient because there wasn't enough VRAM to fit the whole thing yet.
The main advantage of transformers is that they're fast on our hardware, so you can train them on massive amounts of data in the same time it takes better models to be trained on less data.
When "Attention Is All You Need" was first published, even the authors noted that on their benchmark the model did barely better than methods from 2006 (McClosky et al., Effective Self-Training for Parsing) despite using far more data for pretraining, so model superiority wasn't exactly the contribution.
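For readers who haven't seen it, the self-attention being argued about boils down to a few matrix products and a softmax. A minimal single-head sketch in NumPy (random stand-in weights; a real transformer learns them and adds multi-head attention, masking, and MLP blocks):

```python
# Minimal single-head scaled dot-product self-attention, just to pin down what the
# "self-attention" discussed above actually computes.
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, d_k=16):
    """X: (seq_len, d_model) token embeddings -> (seq_len, d_k) attended values."""
    d_model = X.shape[1]
    Wq = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d_k)              # (seq_len, seq_len) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True) # numerical stability for softmax
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)           # each row: how much a token attends to the others
    return A @ V                                 # weighted mix of value vectors

X = rng.standard_normal((10, 32))                # 10 tokens, 32-dim embeddings
print(self_attention(X).shape)                   # (10, 16); every token sees every other token
```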
>ughhh why are you excited
Yes, why?
The theory has been sitting unnoticed for decades (even centuries because Euler was alien). Suddenly, everyone in CS is rediscovering maffs and pretending it's some revolution they invented.
This is you https://doi.org/10.2337/diacare.17.2.152
Compensation for lack of pussy due to severe societal rejection.
Not rocket science.
"Maff" predicts that it shouldn't work. Mafftards have been trying to explain it for ages and are getting nowhere.
>"Maff" predicts that it shouldn't work
Shoddy intuition of applied mathgays might. But plot the loss surface of your net and tell me again it's surprising that "it works". If you are after why the loss surface tends to look as it does, try rediscovering diffy geo next.
>Mafftards have been trying to explain it for ages
There is no mathemagician interested in meme learning. It's all applied mathgays or CStards, rarely physishits.
The closest you'll see a mathemagician to anything you CStards use is compressed sensing.
>they didn't have enough computing power back then
Nothing was preventing you from studying the theory without being able to run your models.
>Transformer models
Oh wow, acausal lstm, such revolution.
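On the "plot the loss surface of your net" suggestion a few posts up, here is one toy way to do it (my own sketch, assuming NumPy and matplotlib): evaluate a tiny net's loss on a 2-D slice of parameter space spanned by two random directions around a reference point. Real loss-landscape papers use trained weights and normalized directions; this is purely illustrative.

```python
# Toy 2-D slice of a loss surface for a tiny 1-hidden-layer tanh net on sin data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 64).reshape(-1, 1)
y = np.sin(2 * x)

def loss(theta, h=8):
    """MSE of a 1-hidden-layer tanh net whose weights are unpacked from the flat vector theta."""
    W1 = theta[:h].reshape(1, h)
    b1 = theta[h:2 * h]
    W2 = theta[2 * h:3 * h].reshape(h, 1)
    b2 = theta[3 * h:3 * h + 1]
    pred = np.tanh(x @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

n_params = 3 * 8 + 1
theta0 = rng.standard_normal(n_params) * 0.5      # reference point (untrained here)
d1 = rng.standard_normal(n_params)                # two random directions in parameter space
d2 = rng.standard_normal(n_params)

alphas = np.linspace(-2, 2, 41)
Z = np.array([[loss(theta0 + a * d1 + b * d2) for a in alphas] for b in alphas])

plt.contourf(alphas, alphas, Z, levels=30)
plt.colorbar(label="MSE")
plt.xlabel("direction 1"); plt.ylabel("direction 2")
plt.title("2-D slice of a toy loss surface")
plt.show()
```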
>The closest you'll see a mathemagician to anything you CStards use is compressed sensing.
>Nothing was preventing you from studying the theory without being able to run your models.
LMAO
>applied
Wrong. Pure mathtards.
>plot the loss surface
is precisely why it shouldn't work.
Thanks for proving beyond doubt that you are on the very left side, yet also at the very maximum point, of a Dunning-Kruger curve.
>Pure mathtards
"Pure", who were too braindead to make it in the field. Good catch.
>is precisely why it shouldn't work
Wow this is impossibru to optimize! Moar layers = moar impossibru! One big saddle is impossible to find!
Pure, as in profs of math in math departments working on math.
Nice cope though, retardo.
>literally can't even read level lines
>even though that's a simple 2d representation so his feeble brain can comprehend
>pretends to like math
Lol. Lmao.
>The whole meme learning renaissance is basically CStards rediscovering undergrad maffs.
This is so wrong it's not even funny. Absolute ignorant midwit take. The renaissance is due to having enough computational power to implement algorithms that would have been impossible to run in a reasonable amount of time on older hardware.
>The renaissance is due to having enough computational power to implement algorithms
Except we don't, and never will, unless we get a massive cluster of so-called "analog computers".
Trial-and-error and statistics are exactly the same
>Is Machine Learning a bunch of bullshit?
It's too based in fact, being based on facts and logic.
Well, according to the YouTube video about the BOT bot, apparently a bot trained on BOT wins at TruthfulQA.
As a concept, no. The current industry standard? Maybe.
many people had made that classification 'mistake' too
if software can't rewrite its own code, it's not AI
absolutely not, it's the frameworks that humans create to satisfy their need for the answer they're looking for that's a bunch of bullshit
Gif related is literally 99% of modern ML applications
Pic related is literally 99% of modern maths
Yes, If you've just passed 8th grade maybe
>arithmetic = math
I fucking hate americans.
You're missing the point
>Formalizations = Math
>Government = God
I fucking hate Eurogays.
This is missing comparisons
They are implemented with arithmetic operations
>Reduced entropy in arbitrary systems is facilitated by gradient descent therefore it's bad.
lol
>Recovery of implicit information manifolds in any information rich context, effectively defeating the curse of dimensionality on small scales, is facilitated by gradient descent therefore it's bad.
lmao
Stick to your own lane, boomers, you literally understand nothing.
Nobody said it's bad, you absolute mongoloid. Read again.
People who say that gradient descent disqualifies neural networks from being significant, if not the most significant technology in the last hundred years, are morons. Every single month a new paper is released that encroaches on the domain of "only humans can do this," and people like them, without missing a beat, hoist up the goalposts and say "well actually, it's these OTHER things that only humans can do; that other thing is easy" over and over.
>if not the most significant technology in the last hundred years
Hyperbole much? You're up against some other really major inventions there.
It might not be there yet, but I think it will be soon. The only thing that could really one-up it on its current trajectory would be some radical life-extension therapies.
It's more accurate to say it's just statistical modeling. It's all just frequencies and transformations on them, and the core is linear algebra because it gives you enough power to basically brute-force most of the viable solution space.
The thing is, it works. Who cares if it merely approximates what you're looking for? It fucking works. And takes no effort to boot.
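To make "the core is linear algebra" concrete with the simplest possible case (a toy sketch, assuming NumPy): ordinary least squares is statistical modeling with a closed-form linear-algebra solution, no iterative training loop at all.

```python
# Ordinary least squares via a linear-algebra solve: fits y = 3x - 1 plus noise
# using a small polynomial design matrix.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = 3 * x - 1 + 0.05 * rng.standard_normal(x.shape)

X = np.vander(x, N=3, increasing=True)        # columns: [1, x, x^2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves min ||X @ coef - y||^2
print(coef)                                   # roughly [-1, 3, 0]
```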
it's kind of interesting that AI can't get black people's faces correct
if you gave a human 100 photos 50% gorilla 50% black person shuffled randomly they would get 0 incorrect
obviously we've been trained for millions of years to recognise human faces but it's still interesting to me
>if you gave a human 100 photos 50% gorilla 50% black person shuffled randomly they would get 0 incorrect
Same if you give it to an ML algorithm. The problem is that you also give the algorithm hundreds of millions of other photos, and the two classes end up sitting close together in the feature space the model learns. You can keep training until it reliably separates them, but doing so tends to overfit and makes the whole thing useless elsewhere. It's really a lack-of-data problem driven by the distance between classes: you'd need roughly as many faces from those two classes as all other images combined for the model to get the prediction right.
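Setting the bait aside, the technical issue in that post is class imbalance. One blunt, generic fix is resampling so every class carries equal weight during training; the sketch below uses made-up arrays and is not anyone's actual pipeline.

```python
# Oversample minority classes until every class matches the largest one.
import numpy as np

def oversample_to_balance(X, y, rng=None):
    """Repeat minority-class rows until every class has as many samples as the largest one."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    keep = []
    for cls in classes:
        idx = np.flatnonzero(y == cls)
        extra = (rng.choice(idx, size=target - len(idx), replace=True)
                 if len(idx) < target else np.empty(0, dtype=int))
        keep.append(np.concatenate([idx, extra]))
    keep = np.concatenate(keep)
    rng.shuffle(keep)
    return X[keep], y[keep]

# toy data: class 0 outnumbers class 1 by 100:1
X = np.random.default_rng(0).standard_normal((1010, 4))
y = np.array([0] * 1000 + [1] * 10)
Xb, yb = oversample_to_balance(X, y)
print(np.bincount(yb))   # [1000 1000]
```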
We don't need general intelligence.
We need spam filters, customer support, mods for internet communities, porn filters, etc...
And we haven't gotten any of that in over a decade.
Customer support and mods for internet communities require general intelligence (it was proven mathematically).
I mean, there's chatbots that can hold a conversation and stuff, and now there's DALL-E.
I would think making a customer support bot or a mod bot would be easier.
Like train it with rigid rules and make it do what it's supposed to do. I think they're already making the porn-filter one though.
>Is Machine Learning a bunch of bullshit?
No. Don't blame the observer for what it observes.
nope, AI is just too based
There are really good, legitimate uses of neural nets. Not just speech and image recognition, but also audio and visual upscaling and filtering: the entire gamut of vague, unfalsifiable interpretations of images and sound.
There's nothing linear about many of the functions applied in neural networks, so no, this guy doesn't know what he's talking about
they're all calculated using matrix multiplication, and matrices are core elements of linear algebra so
Non-linearities are required to prevent the whole network from collapsing into a single linear system. Activation functions are what separate neural networks from plain linear algebra and what let them perform non-linear classification; you don't get universal approximation with just matrices.
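This collapse claim is easy to check numerically (a toy sketch assuming NumPy): two stacked linear layers are exactly one linear map, and inserting a ReLU is what breaks the equivalence.

```python
# Two linear layers compose into one matrix; a ReLU in between does not.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 16))          # batch of 5 inputs
W1 = rng.standard_normal((16, 32))
W2 = rng.standard_normal((32, 8))

two_linear = (x @ W1) @ W2
one_linear = x @ (W1 @ W2)                # same map, collapsed into a single matrix
print(np.allclose(two_linear, one_linear))          # True: no extra expressive power

relu = lambda z: np.maximum(z, 0)
with_relu = relu(x @ W1) @ W2
print(np.allclose(with_relu, x @ (W1 @ W2)))        # False: the ReLU makes it genuinely non-linear
```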
The only bullshit part is that it's not AI: there's zero intelligence in machine learning, it's just a fancy optimization algorithm. The machine itself has no idea what it's doing; it crunches numbers and gives some outputs with a probability.
You can be reductive about literally anything in STEM. Modern computing is just boolean logic, which is even more primitive than linear algebra.
But this doesn't really give you any perspective on the complexity of the system. Modern computing is just boolean logic, but times 10^9, which gives you a huge set of functions that power our modern-day world.
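In the same spirit, "boolean logic times 10^9" composes into arbitrary functions; here is a toy sketch (mine, not the poster's) of a ripple-carry adder built from nothing but AND/OR/XOR on single bits, the primitive that hardware replicates by the billions.

```python
# Ripple-carry addition from single-bit boolean gates.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                       # sum bit
    cout = (a & b) | (cin & (a ^ b))      # carry-out
    return s, cout

def add_bits(x_bits, y_bits):
    """Add two little-endian bit lists of equal length, returning the bit list of the sum."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(add_bits(to_bits(13, 8), to_bits(29, 8))))   # 42
```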
From the OP
>linear algebra on crack
Regression with a lot of computing yields some useful functions. More possibilities are known and will continue to be explored.
A lot of this effort will yield dead ends because of a bullish market, but the better the industry understands NN systems, the more marketable capabilities we will derive.
it's a useful statistical technique
but most applications you see today are bullshit
Is this the BOT vs BOT thread?
BOT is just 4chan + BOT now, everyone moved to other boards.
They say machine learning but don't mean machine learning, or artificial intelligence but don't mean artificial intelligence like when they say diversity but don't mean diversity. It's the same people, everything they say is a lie.