why are they so irrelevant in AI research? even Canada mogs them


  1. 1 year ago
    Anonymous

    they don't speak english

    • 1 year ago
      Anonymous

      China doesn't either and they're a huge producer of AI research

      • 1 year ago
        Anonymous

        Yes, but nothing they produce is actually real. It's a mix of copy-pasta and completely-made-up crap.
        The US is the most productive, followed by Canada (solely because of Hinton and Yoshua Bengio). Actually, Israel contributes quite a lot, especially for a country its size.

        • 1 year ago
          Anonymous

          Hello glowie
          Your wage keeping pace with the cost of living?

        • 1 year ago
          Anonymous
      • 1 year ago
        Anonymous

        Chinks speak better English than Japs.

      • 1 year ago
        Anonymous

        Life is already easy and comfortable enough without AI.
        To be honest, why do they even need AI?
        They are not even pro-censorship, so AI is even less relevant to them.
        And they are not that interested in AI conferences and journals to boot.
        Canada is China

        China wants to do that because they feel the need to compete with the US.
        Most AI research is tied to Nvidia, Google, Facebook, etc., and China has always wanted to compete with them (even in censorship).

        • 1 year ago
          Anonymous

          >Life is already easy and comfortable enough without AI.
          This is like saying “Life is comfortable enough with carriages, why do we need cars?”

          • 1 year ago
            Anonymous

            >why do we need cars?
            Why indeed

          • 1 year ago
            Anonymous

            Cars were adopted only because horses were shitting all over the streets. Cars don’t shit, which is why they were welcome. But cars are still inferior in the majority of cases compared to trains, especially for short-distance travel (subways, trams) and for long-distance travel. Many agree that most of the problems in the US are caused solely by the invention of the car (low social mobility, ghettos, high homelessness, obesity, easy radicalisation because people outside are lonely and can’t touch grass, overreliance on fossil fuels). Cars are a pretty bad example for arguing that the superior innovation eventually took over.

      • 1 year ago
        Anonymous

        The CCP has thrown hundreds of billions at AI research. The CCP is also shameless when it comes to espionage. Every prestigious university conducting research in the AI space has Chinese nationals on staff, and a decent chunk of them are taking notes for the CCP.

        • 1 year ago
          Anonymous

          Most Chinese AI researchers can speak English and Chinese so they have an advantage in building bilingual language models (see GLM-130B).

          • 1 year ago
            Anonymous

            Then again, the only thing they do is 'build', never innovate.
            A good example is Kaiming He, who is really American, but of course in his chinksect ways has dedicated his career to copy-pasting other people's ideas and claiming to be the sole inventor. Unlike some of his compatriots who actually steal by scooping via espionage, he just does a blatant rip-off months after the original paper is released, which is arguably even worse.

  2. 1 year ago
    Anonymous

    bot thread

  3. 1 year ago
    Anonymous

    fax machine good, AI bad
    big robot better

    • 1 year ago
      Anonymous

      newbie...

      • 1 year ago
        Anonymous

        >newbie
        ok i'll leave and leave your site to die, bye bye

    • 1 year ago
      Anonymous

      >spoiler

      • 1 year ago
        Anonymous

        is that a monkey?

      • 1 year ago
        Anonymous

        > forgot to add BOT
        BOT is one of the only boards that feels "BOT". The rest is either leftoid cringe or rightoid moronation.

        • 1 year ago
          Anonymous

          are bot and /jp/ the other BOT boards?

      • 1 year ago
        Anonymous

        How can you be this mentally ill my guy? To be this far up your ass that you claim using the spoiler tag = troony

  4. 1 year ago
    Anonymous

    ai means love and japan is having an incel problem

    • 1 year ago
      Anonymous

      underrated post

      China doesn't either and they're a huge producer of AI research

      Yeah because they break into university networks and steal the work of western researchers

    • 1 year ago
      Anonymous

      Citation needed.
      I see Japanese names all the time in EBSCO. You might not see anything because they aren't the marketing prostitutes you need to be in North America to get any funding. You could also be missing them among all the bullshit Indian papers that bury the system.

      Pic related, it's an abstract from a paper I don't plan to read.

      ai means love and japan is having an incel problem

      They have some terrible hikikomori issues.

  5. 1 year ago
    Anonymous

    They are a bunch of volcano island monkeys
    >b-but their unique culture
    Literally all stolen from chinks

    • 1 year ago
      Anonymous

      The most accurate description.

    • 1 year ago
      Anonymous

      if it's stolen then how come Japanese culture is good and Chinese is shit?

      • 1 year ago
        Anonymous

        Because you know nothing about chinks you weeb

  6. 1 year ago
    Anonymous

    all money is going to robots for the elderly

  7. 1 year ago
    Anonymous

    Because AI research is just useless scum. In corporate, right-wing Japan, it is not customary to throw money away on unpromising projects.

  8. 1 year ago
    Anonymous

    Japan has been technologically stagnant since like 2004

    • 1 year ago
      Anonymous

      >The world has been technologically stagnant since like 2004
      FTFY

  9. 1 year ago
    Anonymous

    >Fifth-generation languages are used mainly in artificial intelligence research. OPS5 and Mercury are examples of fifth-generation languages, as is ICAD, which was built upon Lisp. [...]
    >Most notably, from 1982 to 1993, Japan put much research and money into their fifth-generation computer systems project, hoping to design a massive computer network of machines using these tools.
    >However, as larger programs were built, the flaws of the approach became more apparent. It turns out that, given a set of constraints defining a particular problem, deriving an efficient algorithm to solve it is a very difficult problem in itself. This crucial step cannot yet be automated and still requires the insight of a human programmer.
    Because they were scammed by shitlang shills.
    https://en.wikipedia.org/wiki/Fifth-generation_programming_language?wprov=sfla1

  10. 1 year ago
    Anonymous

    Japs were only innovating because of US money and now they have a new pet in ROC.

  11. 1 year ago
    Anonymous

    AI is a gimmick and the Japanese are a practical people.

    • 1 year ago
      Anonymous

      massive cope

    • 1 year ago
      Anonymous

      Nips are the least practical people in the world. They value appearance over all else.

    • 1 year ago
      Anonymous

      bump

  12. 1 year ago
    Anonymous

    wtf, I wanted to get a postgraduate degree in Japan in something machine learning related, is that a bad idea?

  13. 1 year ago
    Anonymous

    Doesn't Ken Kutaragi have an artificial general intelligence R&D company in Tokyo?

  14. 1 year ago
    Anonymous

    too busy building e-girl robot waifus.
    which is a shame because they'll all be programmed with purple-haired SJW AI that complains about how someone whose age starts with a 2 shouldn't even be talking to anyone whose age starts with a 1

    • 1 year ago
      Anonymous

      Good thing my age starts with a 5

  15. 1 year ago
    Anonymous

    No soul => no creativity

  16. 1 year ago
    Anonymous

    @hardmaru is japanese
    also, Shun-ichi Amari proposed the idea of natural gradients.

    • 1 year ago
      Anonymous

      Natural gradients are a non-thing, and hardmaru is a nobody. Nice self-own, weeb. You're on the same level as those morons who'd quote chomsky as relevant to AI research.

      • 1 year ago
        Anonymous

        Natural gradients are a thing. Yoshua Bengio himself has a paper on it (Revisiting natural gradient for deep networks)
        hardmaru has many publications at top-tier venues.

        • 1 year ago
          Anonymous

          You can shut up instead of digging your hole further with clinically moronic "ideas".
          Natural gradients never have and never will be relevant or used by anyone for anything, which makes them a non-thing.
          Yoshua does not have a paper on this; Pascanu does. Yoshua's name is there because in non-math research, you put the name of the PI/lab manager/etc. (the name varies by field; in CS it is usually RD) as last author.
          By the same token, David Ha virtually never worked on anything on his own; his students did. As for his students, due to his moronation, they also work on fricking nothing, like GAs (a tech that never got any traction and was abandoned in the 80's), on which he is not advancing anything. His lab's publications are mostly in no-name and pointless venues like GECCO, Artificial Life, and various workshops, or worse, at modern NIPS (any publication at NIPS past 2016 is worthless because tech companies took over around that time and gutted the previously research-centric, scientific nature of the conference to make it into another advertisement and recruiting venue instead).
          Once again, none of his work or his students' work has been heard of or used by anyone, because nothing he does is relevant, good, or useful, making him a nobody.

          • 1 year ago
            Anonymous

            Are you Yann? Why are your standards so high for being relevant to AI research? By your standards, the vast majority of research in AI is irrelevant.

            • 1 year ago
              Anonymous

              >Why are your standards so high for being relevant to AI research?
              Obviously because it's a definitional question
              >By your standards, the vast majority of research in AI is irrelevant.
              Duh? Same in every other field, for that matter.
              Papers don't have to be at the level of greedy layerwise pretraining to be 'relevant', but they need to at least be at the level of dropout or Adadelta (an example of work that is now deprecated but was not only relevant in its time but also the foundation of the current SOTA), or TD-Gammon (DL is so powerful that everyone stopped caring about RL, which never had any similar super-result and had to rely on DL to delay its spiral into irrelevancy; but it was an important work at the time, the first AI capable of playing backgammon at a pro level, and it showcased how powerful RL (and the TD-lambda algo) can be, even if that algo was quickly abandoned), or even the ImageNet model now colloquially referred to simply as 'ImageNet' (the name of the competition it won), which was nothing new and merely a demonstration of what deep learning could do on bog-standard hardware at the time.
              But he doesn't even have a paper on the level of DRAW (a now completely forgotten paper that used Gaussian attention to let a model 'attend' to different parts of the image as it generated them in a recurrent setting; it was quite the buzz at the time, one of the early papers that popularized attention mechanisms).
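The TD-lambda update behind TD-Gammon is compact enough to sketch. A minimal version for a linear value function, assuming illustrative feature vectors and hyperparameter values (none of the names below come from the thread):

```python
import numpy as np

def td_lambda_update(w, e, x_s, x_next, reward,
                     alpha=0.1, gamma=0.99, lam=0.8, terminal=False):
    """One TD(lambda) step for a linear value function V(s) = w . x(s).

    w: weight vector, e: eligibility trace, x_s/x_next: feature vectors
    of the current and next state. Returns the updated (w, e).
    """
    v_s = w @ x_s
    v_next = 0.0 if terminal else w @ x_next
    delta = reward + gamma * v_next - v_s   # TD error
    e = gamma * lam * e + x_s               # decay old traces, mark current state
    w = w + alpha * delta * e               # credit every recently visited state
    return w, e
```

Eligibility traces are the point: one reward signal updates every state visited earlier in the episode, which is what made self-play training feasible.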

              • 1 year ago
                Anonymous

                Put another way, if we were in the 80's, his work would probably be capable of gaining relevance, since someone else might be able to do good work based on his, or vice versa (as GAs were popular at the time). But we're not, and therefore his work is just completely useless. It's like the Eugene Goostman chatbot, where they did a big show of 'passing the Turing test' by having 3 judges and paying one of them to claim the bot fooled him, even though that isn't even remotely how Turing tests work; when it was made public, it was clear that even a child couldn't be fooled by it, as it was worse than even Cleverbot at the time.
                Eugene would have been relevant in 1960... maybe. You can't claim that the Eugene team is relevant at all as is.

                >a tech that never got any traction and was abandoned in the 80's
                That was the criticism with neural nets before the 10s. But GAs are due for a resurgence once AI chips reach the mass-adoption stage. Related to GAs, genetic programming will be the only way to rate and rank all the vast amounts of useless ML research in order to get something usable.

                We should be using genetic programming for all our ML research right now as an extension of the scientific method, but the computation simply isn't there. Hence the research methodology is to think up an idea, test it and write some useless paper. That is no way to build true expertise.

                >That was the criticism with neural nets before the 10s.
                Not really, they had proven themselves since like 2003 or so. The problem was two-fold: DEEP neural nets, and neural nets' compute efficiency vs metric efficiency.
                I get your point, though.
                >But GAs are due for a resurgences on AI chips reach mass adoption stage.
                GAs don't benefit at all from "AI chips". They don't make use of highly vectorized ops, and unlike neural nets, they make excellent use of highly parallelized, non-streaming architectures (i.e. CPU clusters), which were never in short supply and continue to be cheaper and easier to use than GPU clusters. "AI chips" are specifically designed to be more specialized GPUs and thus are of no help here.
                I'm all for research in "things that don't work" for the same reason you almost pointed out (you should have said "the 90's", because that's when a bunch of papers came out claiming deep learning could never work, for instance); it's just that it won't be relevant until it's finally good. I'm not saying those irrelevant researchers are never going to be relevant; Yoshua was irrelevant until his key works (but he was always particularly astute and visionary) (cont)

              • 1 year ago
                Anonymous

                > GAs don't benefit at all from "AI chips". They don't make use of highly vectorized ops, and unlike neural nets, they make excellent use of highly parallelized, non-streaming architectures (i.e. CPU clusters), which were never in short supply and continue to be cheaper and easier to use than GPU clusters. "AI chips" are specifically designed to be more specialized GPUs and thus are of no help here.
                Of course they do. AI chips aren't going to be less general than GPUs, but more so. GPUs are only good for embarrassingly parallel workloads, but if you look at some of the videos from Tenstorrent or Cerebras for example, you will see that their chips fill in many gaps that the GPUs don't. I think it was Jim Keller of Tenstorrent who said that GPUs are for the kinds of graphs where the nodes do not communicate with each other, which is suitable for graphics. But ML requires more than that.

                Currently, we are bottlenecked in the kind of algorithmic research we can do because GPUs are only a good fit for supervised learning. You can't use them for game AI or simulators effectively, but you could use AI chips, for example. The reason it works for big labs is that they have huge capital.

                It would be really hard to do something like write a genetic programming system on a GPU, but it would be easy on a PIM chip. And once you could do that, you could give it the right pieces and have it experiment with different learning algos on its own. You could make a game or a simulator and have it generate its own data without the need to pass data back and forth between the CPU and GPU like now.

                GP is not the same as GA, but even if you were optimizing NNs using GAs, you'd still need to crunch those forward passes on a GPU or an AI chip.

              • 1 year ago
                Anonymous

                You clearly have no clue what "AI chips" are or do. You can try googling instead of telling everyone how clueless you are about the topic.
                In particular, stop watching propaganda and believing it.
                >Currently, we are bottlenecked in the kind of algorithmic research we can do because GPUs are only a good fit for supervised learning
                Lolno. Besides the fact that there is no architectural difference between supervised and unsupervised or semi-supervised.
                >simulators
                GPUs are routinely used for all kinds of simulations.
                >game AI
                It's almost always a purely rule-based system (notable exceptions include constraint systems like in F.E.A.R.), so now you're back to thinking 60's and 80's tech is totally great, guys. The 20 years that 100 "researchers" have to spend hand-tuning every rule are totally viable vs running a deep learning model for 10 minutes to outdo all this effort. Totally!
                You don't even begin to grasp what the pros and cons of these methods are. Why? If you are interested in anything AI related, there is an extremely vast body of literature on all aspects of historical and current AI techniques. It's all available at your fingertips. once again, www.google.com.

              • 1 year ago
                Anonymous

                No need to be a butthole about it just because I disagree with you.

                >You clearly have no clue what "AI chips" are or do.
                They are literally PIM chips in a 2d or 3d array. They have some goodies for accelerated matrix multiplication and sparsity, as well as conditional execution. Mentally, I am imagining them as CPUs with a small amount of fast local memory, and no shared memory. This is cool, as it will give me an opportunity to do true multicore programming without the huge hassles of current CPU/GPU models.

                >Lolno. Beside the fact there is no architectural difference between supervised and unsupervised or semi-supervised.
                This is true, but in my view it is exceedingly unlikely that the kinds of algorithms the brain uses fall into what GPUs could be good at. With devices that have independent communicating cores, we'd have a much better chance of figuring them out. Backprop is the enemy we must overcome. I think we'll find specialized memory components whose properties might enable them to be optimized in ways much more efficient than backprop.

                >GPUs are routinely used for all kinds of simulations.
                I don't doubt that, but if you've tried doing RL, the overhead of batching as well as moving data between the CPU and GPU is extremely painful.

                (1/2)

              • 1 year ago
                Anonymous

                >but in my view it is exceedingly unlikely that the kinds of algorithms the brain uses fall into what GPUs could be good at.
                It's a well-accepted fact that ML doesn't work like the brain at all, but then again planes don't flap their wings either.
                There has been lots of research in biologically motivated stuff (projective gradients, Leabra, Hebbian learning, etc.) but it doesn't work well in practice (the key ingredient of everything seems to be gradient descent, and we don't really know how the brain does it, but we know it's really not gradients, although we also have evidence that it's gradient-like. This is kinda strange because we also know the brain's neurons come in different kinds that operate differently, and have recursive (not recurrent) connections that we just can't model in ML because it's too computationally expensive -- but no special chip can fix this part).
                >With devices that have independent communicating cores we'd have a lot better chance figuring them out.
                We have had those since forever. GreenArrays chips are even clockless. All that is, once again, useless for those purposes.

                >Backprop is the enemy we must overcome.
                I basically agree with this, but the few attempts in this direction don't really go anywhere. Gradients are just too powerful.
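For reference, the Hebbian alternative mentioned above is a purely local rule, which is exactly why it is attractive as a backprop replacement. A minimal sketch of Oja's stabilized variant (the learning rate and data below are illustrative, not from the thread):

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule, a stabilized Hebbian update.

    Plain Hebb (dw = lr * y * x) grows without bound; Oja's decay term
    keeps ||w|| near 1 and drives w toward the data's top principal
    component. The update only uses pre/post-synaptic activity: no
    global error signal, no backward pass.
    """
    y = w @ x                        # post-synaptic activity
    return w + lr * y * (x - y * w)  # Hebbian term minus normalizing decay
```

Run on inputs that mostly vary along one direction and w converges to that direction with unit norm, which illustrates both the appeal (locality) and the limitation (it only extracts correlational structure).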

                > Not only is DL (language models, remember these?) far more useful for this, if you mean to evaluate the combination of methods in practice, there has been lots of research on this around 2016-2018 (then nobody cared because it was all underwhelming: the problem is that human intuition is far more resource-efficient there than running a dozen models for days at a time to get the algo to find the best one). The state of the art when I kept up was MaML and its variants (gradient descent-based). GAs were attempted but were always shit, RL was attempted but was also shit, and what worked-kinda was recurrent models looking at the dataset and/or the model arch and 'editing' it based on the data (trained on a large set of data + arch -> performance data), so here once again DL is the winner.
                GAs are not capable of finding novel stuff out of the blue either, and aren't even suitable for hyperparameter search (it's been tried, random search still works better, and grid search is still fine).
                The thing that makes me suspicious about this story is that current-day evo algorithms are not even close in creativity to what we see in nature. So my guess is that the researchers didn't push them hard enough. We are yet to see artificial evolution that can develop its own library of components and compose them in flexible ways the way nature can. Don't you think that capability is exactly what we need for ML research? Also, it is not really a problem to throw in a NN in there somewhere to make the sampling go in a better direction. In my view, the efforts of the reinforcement learning researchers over the past decade were so poor that I can't imagine an automated system (with the right hardware) doing a worse job than them at replacing backprop with something better.

                >So my guess is that the researchers didn't push them hard enough.
                That basically goes without saying, but the problem is as usual one of efficiency x capacity (billions of years of parallel evolution happening in every cell on the planet at once to get one viable bootstrap cell, vs our computers). Obviously ML is far more efficient, even at the model level. Plus we don't actually understand the dynamics of evolution. This is why we need a breakthrough.

              • 1 year ago
                Anonymous

                >We have had those since forever. Green array chips are even clockless. All that is, once again, useless for those purposes.
                I am not familiar with these, but I dare you to look at the videos on the YouTube channels of the two companies I've mentioned and tell me they are the same.

                >This is why we need a breakthrough.
                It is not going to come from people who are poking around with backpropagation. I tried it and failed, and now my mindset is to figure out what is needed for the next attempt to be successful. The answer is always better hardware.

                >but no special chip can fix this part.
                I'll note this just because it is interesting. This stretches what I mean when I say hardware, but it is an easy counterpoint to what you are saying even if you won't believe my claims about general purpose AI chips.

                https://futurism.com/startup-computer-chips-powered-human-neurons

              • 1 year ago
                Anonymous

                You keep posting adverts, PR, and fiction as your sources/inspirations. Stop it. Try actually understanding what is really happening before you think about what really can or can't be done. This is achieved through reading published papers, or the blog posts of prominent researchers (not ~~*philosophers*~~ and ~~*psychologists*~~ and ~~*futurologists*~~ and other fiction authors), technical books, and going from there.
                Meme AI chip companies have been a thing since the 1990's (FPGAs for Hopfield nets, DSP chips for OCR, etc.) and they've never produced anything anyone ever found an advantage in. IBM has tried in-memory chips and ~~*brain-inspired architecture*~~ chips since 2017; they're worthless. Google has its TPUs, which only provide an advantage over GPUs if your model is specifically designed with TPUs in mind, making them useless in practice unless you're Google with a massive team of engineers dedicated to modifying the system for max TPU compat.
                Of course cerebras' garbage chips and troonytorrent's even hilariouser ones aren't the same as GreenArrays chips because... they are just matrix multiplication ASICs. There is no "communication between components" or any other nonsense your fiction writers told you about. Cerebras's chips at least aim to accelerate training (and fail at it if your model isn't a bog-standard transformer), but troonytorrent's are literally just for inference, and thus aren't even relevant to your moronic "argument".

              • 1 year ago
                Anonymous

                >It's almost always a purely rule-based system (notable exceptions include constraint systems like in F.E.A.R.), so now you're back to thinking 60's and 80's tech is totally great, guys. The 20 years that 100 "researchers" have to spend hand-tuning every rule are totally viable vs running a deep learning model for 10 minutes to outdo all this effort. Totally!
                Sarcasm doesn't translate too well over the Internet. You have to keep in mind that the 60s ideas were tried on 60s hardware before being abandoned. I am predicting that they will start working on future hardware. The main benefit better hardware will bring to game AI is the ability to run so many different ideas and test them automatically. Who knows, maybe disparate researchers found independent key components of tomorrow's algorithms that don't work in isolation, and lack the insight to bring them together. A powerful enough automated system could see the search space a lot more clearly.

              • 1 year ago
                Anonymous

                >You have to keep in mind that the 60s idea were tried on 60s hardware before being abandoned.
                The problem is that hardware was never the problem here; human effort was. You needed decades and triple-digit numbers of skilled scientists to build your system. You still need that right now unless you use... DL. DL can fill the database for you much better and faster than your huge team of humans. The next problem is that the quality of the solution does not scale well with the size of the DB you need (which has repercussions on solution time). Once again, DL can just 'compress the DB' into its weights, and thus it's again pointless to think about those expert systems. One major issue with those systems is also the inability to interpolate. DL does not have that problem (in fact, it sometimes interpolates a bit too much, but that's something more pleasing to users than the reverse).
                tl;dr: the problem with expert systems, symbolic AIs, fuzzy logic and all that crap has PRECISELY NOTHING to do with compute.
                (Incidentally, GAs, or GP in general, may in fact be bottlenecked at some level by compute, even though there have also not been any ideas suggested that would fix the efficiency issues.)
                >The main benefit better hardware will bring to game AI is the ability to run so many different ideas and test them automatically.
                This will never happen because game AI does not try to be good. There have been good, even adaptive game AIs before and the idea was always abandoned because the AI was too hard for casuuuls, moreover it makes it harder to design encounters correctly for the game designers.
                Just like in the expert systems case, you don't understand at all what the pros and cons of those systems are (or even the requirements).
                >Who knows, maybe disparate researchers [...]
                That's language models and it's already being done.

              • 1 year ago
                Anonymous

                Currently the problem with GAs is that they're no more efficient than any other Monte Carlo method in theory, and in practice they're less efficient than RL (in terms of convergence rate). They also tend to be harder to specify than RL, despite RL being notoriously finicky and requiring intense manual parameter/problem design management. Only a breakthrough-type paper will change this.
                Regarding
                >rate and rank all the vast amounts of useless ML research in order to get something usable.
                Not only is DL (language models, remember these?) far more useful for this, but if you mean to evaluate the combination of methods in practice, there has been lots of research on this around 2016-2018 (then nobody cared because it was all underwhelming: the problem is that human intuition is far more resource-efficient there than running a dozen models for days at a time to get the algo to find the best one). The state of the art when I kept up was MAML and its variants (gradient descent-based). GAs were attempted but were always shit, RL was attempted but was also shit, and what worked-kinda was recurrent models looking at the dataset and/or the model arch and 'editing' it based on the data (trained on a large set of data + arch -> performance data), so here once again DL is the winner.
                GAs are not capable of finding novel stuff out of the blue either, and aren't even suitable for hyperparameter search (it's been tried, random search still works better, and grid search is still fine).
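The random-search claim above is easy to make concrete. A toy sketch of random search over a hyperparameter space; the objective, parameter names, and ranges are all made up for illustration:

```python
import random

def random_search(objective, space, n_trials=100, seed=0):
    """Sample hyperparameters uniformly from `space` and keep the best.

    space: dict mapping name -> (low, high) range sampled uniformly.
    Returns (best_params, best_score), minimizing the objective.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: pretend validation loss depends mostly on the learning rate.
best, loss = random_search(
    lambda p: (p["lr"] - 0.01) ** 2 + 0.1 * (p["dropout"] - 0.5) ** 2,
    {"lr": (1e-4, 0.1), "dropout": (0.0, 0.9)},
)
```

The usual argument for it over grid search: when only a couple of dimensions actually matter, random sampling probes many more distinct values per important dimension than a grid of the same budget does.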

              • 1 year ago
                Anonymous

                > Not only is DL (language models, remember these?) far more useful for this, if you mean to evaluate the combination of methods in practice, there has been lots of research on this around 2016-2018 (then nobody cared because it was all underwhelming: the problem is that human intuition is far more resource-efficient there than running a dozen models for days at a time to get the algo to find the best one). The state of the art when I kept up was MaML and its variants (gradient descent-based). GAs were attempted but were always shit, RL was attempted but was also shit, and what worked-kinda was recurrent models looking at the dataset and/or the model arch and 'editing' it based on the data (trained on a large set of data + arch -> performance data), so here once again DL is the winner.
                GAs are not capable of finding novel stuff out of the blue either, and aren't even suitable for hyperparameter search (it's been tried, random search still works better, and grid search is still fine).
                The thing that makes me suspicious about this story is that current-day evo algorithms are not even close in creativity to what we see in nature. So my guess is that the researchers didn't push them hard enough. We are yet to see artificial evolution that can develop its own library of components and compose them in flexible ways the way nature can. Don't you think that capability is exactly what we need for ML research? Also, it is not really a problem to throw in a NN in there somewhere to make the sampling go in a better direction. In my view, the efforts of the reinforcement learning researchers over the past decade were so poor that I can't imagine an automated system (with the right hardware) doing a worse job than them at replacing backprop with something better.

              • 1 year ago
                Anonymous

                i think you'd like this guy

          • 1 year ago
            Anonymous

            >a tech that never got any traction and was abandoned in the 80's
            That was the criticism of neural nets before the 10s. But GAs are due for a resurgence once AI chips reach the mass-adoption stage. Related to GAs, genetic programming will be the only way to rate and rank all the vast amounts of useless ML research in order to get something usable.

            We should be using genetic programming for all our ML research right now as an extension of the scientific method, but the computation simply isn't there. Hence the research methodology is to think up an idea, test it, and write some useless paper. That is no way to build true expertise.

          • 1 year ago
            Anonymous

            I thought natural gradients were interesting, that parameter space isn't Euclidean and that that means the direction of steepest ascent isn't the naked gradient. Do you say it's a non-thing just because it's not practically useful? I vaguely recall that there are practically useful techniques that can be shown to approximate the natural gradient, e.g. Adam. Though I guess skeptics would say that it's just another second-order technique.

            • 1 year ago
              Anonymous

              >I thought natural gradients were interesting
              Of course they are.
              So is hebbian learning and the leabra framework.
              So are hopfield networks.
              RL and linear programming are far more interesting than neural networks and gradient descent.
              But ultimately, none of that actually works.
              >Do you say it's a non-thing just because it's not practically useful?
              Yes.
              >there are practically useful techniques that can be shown to approximate the natural gradient, e.g. Adam
              But that's pointless because 1- it wasn't developed with natural gradients in mind, 2- it doesn't matter whether or not it approximates the natural gradient, and importantly 3- it works no matter how good or bad that approximation might be.
              Also, always be careful when you see 'x is shown to approximate y' (or to be equivalent to y); those claims almost always require heavy qualifications (for example, "if the function is both L-smooth and L-Lipschitz, then the absolute difference is smaller than epsilon once iterations approach lambda (a function of model-specific details, usually)"), but in practice the functions we optimize are neither L-smooth nor even L-Lipschitz.
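              For context, the "natural gradient" under discussion preconditions the plain gradient with the inverse Fisher information matrix, which is what accounts for the non-Euclidean geometry of parameter space:

```latex
% Plain gradient step:    \theta \leftarrow \theta - \eta \, \nabla_{\theta} L
% Natural gradient step:  \theta \leftarrow \theta - \eta \, F(\theta)^{-1} \nabla_{\theta} L
\tilde{\nabla}_{\theta} L = F(\theta)^{-1} \, \nabla_{\theta} L,
\qquad
F(\theta) = \mathbb{E}_{x \sim p_{\theta}}\!\left[
    \nabla_{\theta} \log p_{\theta}(x) \, \nabla_{\theta} \log p_{\theta}(x)^{\top}
\right]
```

              The sense in which Adam "approximates" this is loose: its per-parameter second-moment scaling acts like a crude diagonal approximation of F, which is exactly the kind of claim that needs those heavy qualifications.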

              • 1 year ago
                Anonymous

                >those claims almost always require heavy qualifications
                In machine learning as a whole*
                The theory is far behind the practice here. Even for the single greatest theoretical result in ML (Nesterov momentum): nobody understands the paper, Nesterov himself can't explain the proof in a way others can follow, the result is still heavily qualified despite the method working outside those qualifications, AND nobody uses Nesterov momentum anymore.

  17. 1 year ago
    Anonymous

    japan is extremely conservative outside of tokyo and maybe a few other cities. they were still using floppy disks until sony refused to keep manufacturing them

  18. 1 year ago
    Anonymous

    They want a true AI, not pajeet scam

  19. 1 year ago
    Anonymous

    wrong

  20. 1 year ago
    Anonymous

    AI is dekinai who can't handle Japanese

  21. 1 year ago
    Anonymous

    >AI = soulless
    >Japan = soul
    Theres your answer.

    • 1 year ago
      Anonymous
  22. 1 year ago
    Anonymous

    AI PIGGU GO HOOOOOOOOOMU

  23. 1 year ago
    Anonymous

    When you think of japs and AI research, you think of video games?

    • 1 year ago
      Anonymous

      Seriously, Japanese mathematicians laid the foundations for some of the latest SOTA (the Euler–Maruyama method, Itô calculus, etc.)
      But brainless of BOT would disagree
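      For anyone wondering what the Euler–Maruyama method actually is: it's the SDE analogue of the Euler method, discretizing dX = a(X) dt + b(X) dW. A minimal sketch for geometric Brownian motion (the mu/sigma/T numbers here are arbitrary):

```python
import math
import random

def euler_maruyama(x0, mu, sigma, T, n_steps, seed=0):
    """Simulate dX = mu*X dt + sigma*X dW (geometric Brownian motion) with the
    Euler-Maruyama discretization X_{k+1} = X_k + mu*X_k*dt + sigma*X_k*dW_k."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = x0
    path = [x]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + mu * x * dt + sigma * x * dW
        path.append(x)
    return path

path = euler_maruyama(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n_steps=1000)
print(path[-1])
```

      The same Itô-calculus machinery is what underlies diffusion models and score-based generative modeling, which is the SOTA connection being made above.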

      • 1 year ago
        Anonymous

        It's hilarious how much you weebs cope and seethe at all times.
        >nobody did nothing and nobody has ever heard of it or used it
        >therefore it's revolutionary and it's the basis of everything people do
        The absolute state.

        • 1 year ago
          Anonymous

          >Itou calculus
          >nobody has ever heard of it or used it
          The state of BOT

          • 1 year ago
            Anonymous

            Your tears are my sustenance, moronic weeb.

      • 1 year ago
        Anonymous

        Because of some aspects of japanese culture like workaholism, loyalty towards corporations, personal responsibility on work matters, etc.

        Japs make outstanding engineers, but shit scientists. Also generally quite bad at computing. They were more relevant in the early years, when programming involved much more bit wrangling and was closer to engineering than it is now, much higher level and closer to maths.

        Exceptions, not the rule

  24. 1 year ago
    Anonymous

    they like supercomputers

    • 1 year ago
      Anonymous

      That's related to why they're so far behind: they're obsessed with 1980's american technology and techniques and that's about all they know or do.

      • 1 year ago
        Anonymous

        Boomers still run everything there. In fact it's probably older than boomers.

    • 1 year ago
      Anonymous

      >far behind.
      Have you ever used, or do you even know what these supercomputers are for lol
      Imagine having access to 10 Tesla V100s and distributed training, while watching animu on your thinkpad.
      BOT is really a relic of the past

      • 1 year ago
        Anonymous

        >10 V100
        >supercomputer
        Take your meds, schizo.

        • 1 year ago
          Anonymous

          Yes, idk about Fugaku, but supercomputers can have GPUs, like the Taiwania series.
          A supercomputer that relies only on CPUs is even more awesome

          • 1 year ago
            Anonymous

            Yes, they can have GPUs. 10 V100 is what an underfunded uni's lab has in the shoe closet, you dense motherfricker.

            • 1 year ago
              Anonymous

              I know
              I just picked an example that is easy to understand
              >10 V100 is what an underfunded uni's lab has in the shoe closet
              Lol you overrate uni labs' capabilities

              • 1 year ago
                Anonymous

                >I was just pretending to be moronic
                I accept your surrender.

            • 1 year ago
              Anonymous

              >tfw my uni doesn't have a single V100

              • 1 year ago
                Anonymous

                Damn, what noname shithole are you at? My lab had literally 7 students and we had 4 V100s; the DL lab had a few hundred.

  25. 1 year ago
    Anonymous

    It's bloat, in ideological form. Japan only supports bloat as function, proof: XMB interface.

  26. 1 year ago
    Anonymous

    You're asking why people who use
    >Ruby
    >fax machines
    >Windows 7
    >floppy disks
    can't into AI? I rest my case.

    • 1 year ago
      Anonymous

      >Windows 7
      Worse, for the majority it's macOS.
      Only hospitals and the like use Windows XP and 7.

  27. 1 year ago
    Anonymous

    Canada had a shitton of government-funded AI research going through the AI winter, unlike other nations. Many top AI figures today are Canadian for that reason.

  28. 1 year ago
    Anonymous

    Jap here.

    It's not that Japan doesn't want to advance AI research.

    1. The population is declining, so we've got less people. Little immigration to offset that.
    2. Job market liquidity is very poor. Companies are forbidden from firing employees unless some very strict criteria are met. Even if an employee quits and tries to find a new job, companies are hesitant to hire new talent because it's very hard to fire him if he turns out to suck.
    3. Japanese politicians are terrified of words like "lay offs" or "going out of business" so they prop up shitty loss making companies with government subsidies. Thus human capital is tied up in stagnant/dying zombie companies.
    4. Bureaucrats and politicians ban anything at the drop of a hat if any Karen decides to complain. (Drones are basically unflyable by hobbyists in Japan, and Uber was basically shut out of Japan due to Taxi lobbyists)

    As a result, Japan doesn't have enough skilled workers, existing skilled workers are tied up in shit companies, those shit companies refuse to die due to government subsidies, and dumb government officials kill innovation.

    • 1 year ago
      Anonymous

      >Jap here.
      Stopped reading there, matthew-kun.

      • 1 year ago
        Anonymous

        K

        • 1 year ago
          Anonymous

          post waifu pillow or gtfo

          • 1 year ago
            Anonymous

            I don't have one, so here's a weeab regex book.

            • 1 year ago
              Anonymous

              I like how your post text doesn't match.

            • 1 year ago
              Anonymous

              I want it

              • 1 year ago
                Anonymous

                me on the left

            • 1 year ago
              Anonymous

              qt

            • 1 year ago
              Anonymous

              >weeab regex book
              lel

        • 1 year ago
          Anonymous

          post 醤油顔 or you're lying

    • 1 year ago
      Anonymous

      >Drones are basically unflyable by hobbyists in Japan
      Holy shit fricking based. Drones are an evil privacy destroying technology and anyone flying a drone near my land should have no legal protection from me putting buckshot into that demonic wiretap.

      • 1 year ago
        Anonymous

        I mean, I'd get your sentiment if the ban was just over residential areas/places with people. But you can't even fly a drone in an absolutely empty park. It doesn't make any sense tbqh.

        • 1 year ago
          Anonymous

          Personally I think it should be taken one step further and all satellites should be illegal. They provide nothing of value that we can't replicate with ground-based technology, and there is no such thing as a consumer satellite; they are all used by and for billion-dollar companies and governments who use them to violate your privacy.

          • 1 year ago
            Anonymous

            >They provide nothing of value that we can't replicate with ground-based technology
            Is there a ground based alternative for GPS?

            • 1 year ago
              Anonymous

              yeah IIRC, you just need linked-up towers all over the earth that can triangulate positions from one another (that's basically how LORAN worked before GPS)
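              A toy version of that idea: with synchronized towers you can turn signal timing into distances and solve for position. Here's a minimal 2D trilateration sketch (the tower coordinates and distances are made up; real systems like LORAN used hyperbolic time differences rather than absolute ranges):

```python
import math

def trilaterate(towers, distances):
    """Recover (x, y) from distances to three known towers by subtracting
    the circle equations pairwise, which leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = distances
    # (x-x1)^2+(y-y1)^2=r1^2 minus the same equation for towers 2 and 3:
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # nonzero iff the towers aren't collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
# Receiver at (3, 4): distances are 5, sqrt(65), sqrt(45).
dists = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
print(trilaterate(towers, dists))  # -> approximately (3.0, 4.0)
```
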

      • 1 year ago
        Anonymous

        Birdshot is better for drones.

    • 1 year ago
      Anonymous

      this is what every aging developed country inevitably becomes
      can relate a bit while living in italy (even if we aren't as efficient and rich as you)

      • 1 year ago
        Anonymous

        same in france and spain

    • 1 year ago
      Anonymous

      damn
      i thought you were better
      that sounds just like Italy

      out of our old BOTang only german bros managed to be somewhat advanced

      • 1 year ago
        Anonymous

        Everything he posted also applies to Germany though

        • 1 year ago
          Anonymous

          ah frick

          • 1 year ago
            Anonymous

            CHECKEM

      • 1 year ago
        Anonymous

        >german bros managed to be somewhat advanced
        l-lol?

  29. 1 year ago
    Anonymous

    The Japanese understand taking care of their own people, unlike the literally culturally-suicidal West. They have a massive make-work culture to take care of their elders and to foster the creation of companies by the young. You don't get the same looking-after-our-own culture in the West.

    It follows that they use every means to keep tasks in human hands, the fax machine being a perfect example. There are many such cases: everything there is paper-driven, including applying for internet, phone, power, etc. You don't do any of it online.

    The West doesn't care about destroying itself. Hence our fascination with AI to destroy our own jobs, our culture, our heritage, and eventually human lives themselves.

    Only an idiot automates himself out of a job.

    • 1 year ago
      Anonymous

      >Muh degenerate west vs based trad east
      Explain why China is going all in on AI then? Japan is just a boomer-run society. Even more than the West.

      • 1 year ago
        Anonymous

        China is an alternate reality morally corrupted Japan
        It's like they accidentally opened a portal to hell and the miasma engulfed the whole nation and people

        • 1 year ago
          Anonymous

          don't you mean Japan is an alternate reality morally corrupted China?

    • 1 year ago
      Anonymous

      Imagine working your whole life and wasting your potential at a Japanese company shuffling paper and doing things in a pre-internet way because of boomers refusing to let shit companies die.

  30. 1 year ago
    Anonymous

    They are irrelevant in everything, not just AI

    • 1 year ago
      Anonymous

      This

      >even Canada
      Canada mogs literally everyone (except for the US) in AI. They fricking invented deep learning.

      Kinda also this. In truth, deep learning was a mostly US, somewhat euro thing (Tesauro, of TD-Gammon fame, and his cronies were involved in it in the 90's), but deep nets were thought too hard to train until greedy layerwise pretraining by Bengio. This, combined with follow-up work by LeCun and Hinton, sealed the deal and brought in deep learning as the new paradigm.

  31. 1 year ago
    Anonymous

    >even Canada
    Canada mogs literally everyone (except for the US) in AI. They fricking invented deep learning.

  32. 1 year ago
    Anonymous

    Why are so many people irrelevant compared to Santiago Gonzalez?

    • 1 year ago
      Anonymous

      did he an hero?

      • 1 year ago
        Anonymous

        Not at all, he's a fully leveled up Wizard.

    • 1 year ago
      Anonymous

      Wasn't his dream to work at apfail or something? Quite a downgrade to be at cognizant. To be expected since he's always been a tard.

      • 1 year ago
        Anonymous

        Working at Cognizant means no sick time.
        I was fired from there because I would on occasion get a neurological blurriness and severe headache where reading text was impossible so I had to go home.

  33. 1 year ago
    Anonymous

    because you can't use AI to make anime titty sex games
    if you can't make anime titty sex games your technology is irrelevant to the Japanese

  34. 1 year ago
    Anonymous

    japs suck at innovating, they're just really good manufacturing above average quality products

    • 1 year ago
      Anonymous

      >japs suck at innovating
      more like they fricking stopped innovating because of their risk-averse culture.
      Why the frick do you think they still use fax machines? Because they're scared of change.

      If it earns us money then why take risk?
      Thats literally japanese culture.

      • 1 year ago
        Anonymous

        Fax machines are better technology than email. They use them because they aren't moronic westerners who want planned obsolescence, creating worse technology to replace old, better technology - see electric vehicles compared to ICE.

        • 1 year ago
          Anonymous

          >fax machines are better
          yeah, so much better that they basically shut down the japanese hospital system when they got overloaded during the coof.

          Also don't compare fax machines to ICE and EVs, moron.

          • 1 year ago
            Anonymous

            Westerners don't innovate, they devolve. The green movement is trying to ban mankinds very first invention by making it a crime to have campfires and wood stoves.

            • 1 year ago
              Anonymous

              EVs are certainly better than ICE in that they don't "emit" pollution.
              The only thing making EVs shit is that the battery is hard to recycle. Other than that, an EV is 100% recyclable like an ICE car, in that you can just turn it back into scrap metal.

              • 1 year ago
                Anonymous

                >completely avoids mentioning anything about reliability and performance which 99% of people care about
                >focuses on pollution and recycling which 99% of people don't give a shit about because that's not the job description of an automotive vehicle

              • 1 year ago
                Anonymous

                Why can't you just fricking accept that EVs and ICE both have advantages and disadvantages? Do you really want to devolve this discussion into polBlack person shit?

              • 1 year ago
                Anonymous

                I and every non-bugman car user prefer being able to drive more than 100 miles and to refuel in 60 seconds as opposed to 60 minutes. You will never be a real car.

              • 1 year ago
                Anonymous

                EVs are terrible. Their fuel is not available in liquid form.

                Frick! They don't even have modular batteries that you could swap out like Lego bricks.

  35. 1 year ago
    Anonymous

    Japan lost its innovative spirit thanks to "saving" its economy from total collapse back in the early 90s. Their entire economy, much like most western nations', is artificial. They can't bear the idea of letting the dead wood clear out of the economy, and combined with extremely socialist workers' rights laws, the entire thing has ground to a complete standstill. This is essentially what all western economies will become; Japan is simply about 20 years ahead of everybody else.

    While everybody else's totally fake economies really got going after 2008 Japan's been at this almost 20 years longer.

  36. 1 year ago
    Anonymous

    Japan doesn't innovate in anything anymore. I don't know what it is, but their economy seems completely moribund.

  37. 1 year ago
    Anonymous

    Because what’s the point. AI is going to end up being like programming. Sure there might be money in a programming language and maintaining tools for it. But there is also money is using the language and toolset to build something. AI is the same. Own AI make money. Make something with AI still make money.

  38. 1 year ago
    Anonymous

    >Why Is It Unusual For Japanese People To Use Computers?
    https://www.animenewsnetwork.com/answerman/2016-05-23/.102406

    >According to a 2015 study by the Japanese Cabinet Office,
    >only 30% of Japanese high schoolers use laptops, and
    >only 16% use desktop computers.
    >(In the US, 98% of our teenagers use one or the other, with similar numbers out of the UK.)
    >According to one study I found, about 50% of Japanese households have a computer,
    >but many people don't use them, or only use them for games or web browsing.
    >The majority of Japanese students use the internet exclusively through cell phones

    • 1 year ago
      Anonymous

      Basically the case everywhere in the world now, japs were just ahead of the curve.

    • 1 year ago
      Anonymous

      now how about 2020 or 2022? This was even before vtubers were popular. I'm 100% certain that PCs are more popular there than ever because of it.

  39. 1 year ago
    Anonymous

    AI is one of the messiest things on earth, software wise. Right after webdev. I don't find it at all surprising that nips stay away from it, it would probably give them a heart attack. You need chink or pajeet mentality to fully embrace stuff like that, where you're used to nonsensical chaos.

  40. 1 year ago
    Anonymous

    These motherfrickers invented Ruby and still probably use it for all their scripting , which is very moronic

  41. 1 year ago
    Anonymous

    Japs don't have souls and are incapable of any creativity

    • 1 year ago
      Anonymous

      But manga is the best creative medium right now.

      • 1 year ago
        Anonymous

        >manga
        Troon garbage

        • 1 year ago
          Anonymous

          Nothing is better than it. Cope.

          • 1 year ago
            Anonymous

            YWNBAW

          • 1 year ago
            Anonymous

            hahaha troon

      • 1 year ago
        Anonymous

        >99% copy-paste everything from story, art and character archetypes
        >creative at all, let alone best creative

        • 1 year ago
          Anonymous

          What's the best current creative medium then?

          • 1 year ago
            Anonymous

            shitposting

          • 1 year ago
            Anonymous

            Still painting, although "variants", including the likes of microlithography, also qualify.

            shitposting

            No, even shitposting is dead. It's all lowest-effort zoomer shit now. Back in my days, trolling was a art.

            • 1 year ago
              Anonymous

              fr fr no cap

  42. 1 year ago
    Anonymous

    This fricking "AI" was founded and developed by data-selling corporations and has nothing in common with real value and usefulness; it's just technology for better tracking you, selling your ass, and killing the value of human effort

  43. 1 year ago
    Anonymous

    They know it's evil

  44. 1 year ago
    Anonymous

    I'd say there's little to be proud of in working to make the human mind irrelevant and worthless to the success of a country's rulers, but considering that they've never fricking let up on universal masking, I don't think they value people very much over there either.

  45. 1 year ago
    Anonymous

    They aren't, it's just that the field they are most interested in (robotics) doesn't get much attention

  46. 1 year ago
    Anonymous

    The "why is Japan..." threads on multiple channels are silly. The BOT jannie/creative thread marketing team needs something new. C'mon guys, do something new instead of the same ole same ole.

  47. 1 year ago
    Anonymous

    because having functional robots that you can understand and actually help you is better than having ai spying on you and offering you ads.

  48. 1 year ago
    Anonymous

    is anyone actually reading these walls of text?

  49. 1 year ago
    Anonymous

    Other than anime they can't produce shit, they are that incompetent

  50. 1 year ago
    Anonymous

    I love Japan

    • 1 year ago
      Anonymous

      same

  51. 1 year ago
    Anonymous

    they already produce the best notebooks and pens though.
    What more do you need

    • 1 year ago
      Anonymous

      They most definitely don't. As usual, weebs will hype anything Japanese even though the best they can achieve is perfect mediocrity (much better than chink shit, for instance, but it pales in comparison to everything else).
