You can literally learn AI for free nowadays

Why aren't you taking full advantage of the internet, anons?

  1. 6 months ago
    Anonymous

    It might be more useful for me to teach a course than to take one.

    • 6 months ago
      Anonymous

      What would you teach and would u teach me

      • 6 months ago
        Anonymous

        I would teach how to use tools with pretty bubbles for n-ary fact type and workflow modeling. That's all you need to generate formalized yet natural-language entity descriptions, which can be used to generate databases, full applications, or AI prompts.

        • 6 months ago
          Anonymous

          My fricking sides. Again! Tell us another joke!

          • 6 months ago
            Anonymous

            I don't understand what you find humorous. Do you not see the pretty bubbles?

      • 6 months ago
        Anonymous

        >would u teach me
        Very Professional Tool With Pretty Bubbles #1

        Very Professional Tool With Pretty Bubbles #2

  2. 6 months ago
    Anonymous

    I did. When I was in school I did coursework on machine learning: how backpropagation works and how to train a model to classify the MNIST dataset.
    It's simple school math; if you can differentiate a function, you can understand it with the 3b1b videos.
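
    Roughly this; a minimal numpy sketch from memory (random arrays standing in for MNIST, not my actual coursework):

      import numpy as np

      # toy stand-in for MNIST: 64 flattened 28x28 images, 10 classes
      X = np.random.rand(64, 784)
      y = np.random.randint(0, 10, size=64)

      # one hidden layer, small random init
      W1, b1 = np.random.randn(784, 128) * 0.01, np.zeros(128)
      W2, b2 = np.random.randn(128, 10) * 0.01, np.zeros(10)
      lr = 0.1

      for step in range(100):
          # forward pass
          h = np.maximum(0, X @ W1 + b1)                     # ReLU
          logits = h @ W2 + b2
          p = np.exp(logits - logits.max(axis=1, keepdims=True))
          p /= p.sum(axis=1, keepdims=True)                  # softmax

          # backward pass: nothing but the chain rule
          dlogits = p
          dlogits[np.arange(64), y] -= 1                     # dL/dlogits for cross-entropy
          dlogits /= 64
          dW2, db2 = h.T @ dlogits, dlogits.sum(0)
          dh = dlogits @ W2.T
          dh[h <= 0] = 0                                     # ReLU derivative
          dW1, db1 = X.T @ dh, dh.sum(0)

          # plain SGD update
          W1 -= lr * dW1; b1 -= lr * db1
          W2 -= lr * dW2; b2 -= lr * db2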

    Btw, have you done anything, OP?

    • 6 months ago
      Anonymous

      bullshit, it's impossible to comprehend. you type in something and a bunch of metal can talk to you like a real human being or make an image for you. that's not simple school math

      • 6 months ago
        Anonymous

        >tensors are impossible to understand, it's just magic
        I never liked the explanation for matrix multiplication. The teacher didn't know why it worked.

        • 6 months ago
          Anonymous

          Nobody knows "why" it works. According to the theory, it shouldn't: the math says we should be waiting hundreds of years to get the kind of results we get in an hour of training, because the asymptotic guarantees only make sense if the whole loss function is convex and we use all kinds of tricks nobody bothers with nowadays, like Nesterov momentum and weight decay, and we know those assumptions don't hold in practice.
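
          For reference, the kind of update rules I mean; a minimal sketch (function and variable names mine):

            import numpy as np

            def nesterov_step(w, v, grad_fn, lr=0.01, momentum=0.9, weight_decay=1e-4):
                """One SGD step with Nesterov momentum and (decoupled) weight decay."""
                g = grad_fn(w + momentum * v)      # gradient at the look-ahead point
                v = momentum * v - lr * g          # update the velocity
                w = w + v - lr * weight_decay * w  # apply the step plus weight decay
                return w, v

            # usage on a toy convex quadratic, where the asymptotic guarantees do hold
            w, v = np.array([5.0, -3.0]), np.zeros(2)
            for _ in range(200):
                w, v = nesterov_step(w, v, grad_fn=lambda x: 2 * x)  # f(x) = ||x||^2
            print(w)  # -> close to the minimum at 0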

          Ultimately it's just "well, there is a good function minimum in this spot and the model found it".

          I've always been frustrated that we don't know how to extract the rules learned by a deep learning system so we could explore them or encode them in a hard-coded rule-based engine (no matter how big). In my view, this is the end-goal of deep learning: if we ever manage it, deep learning will henceforth be used to teach us how a task is done when we have no clue where to start but have data to bootstrap the process; we will then extract the idea for the solution from the model and program a system that does the task properly.

          I digress though.

          • 6 months ago
            Anonymous

            I'm just talking about matrices, not LLMs. You know, like [[1,0,0],[0,1,0],[0,0,1]], not trained vectors.
            Besides, "why" LLMs work is well-known to esotericists (read: people who have read too much /x/) like myself. It's because English and many other languages are based on older languages, which were designed for "casting" "spells" using a "grammar." Bible-influenced English inherently contains Greek isopsephy and Hebrew gematria that can be decoded by pattern analysis.
            Imagine inventing language 5000 years ago and finding that the letters you used spoke on their own. This is where religion came from.

            • 6 months ago
              Anonymous

              LLMs use tokens, not words

              • 6 months ago
                Anonymous

                And tokens are encoded into numbers which are multiplied by matrices.
                But don't be hard on the man, he did admit he's a /x/ schizo.
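
                Literally this, minus a few thousand dimensions (toy sketch, made-up sizes):

                  import numpy as np

                  vocab = {"the": 0, "cat": 1, "sat": 2}  # toy tokenizer output
                  E = np.random.randn(3, 4)               # embedding table: 3 tokens, dim 4
                  W = np.random.randn(4, 4)               # one learned weight matrix

                  ids = [vocab[t] for t in ["the", "cat", "sat"]]
                  x = E[ids]                              # token ids -> vectors (lookup)
                  h = x @ W                               # ...which get multiplied by matrices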

              • 6 months ago
                Anonymous

                And English uses word roots, and isopsephy groups words by semantic meaning.
                What's your point?

                >And tokens are encoded into numbers which are multiplied by matrices.
                >But don't be hard on the man, he did admit he's a /x/ schizo.

                STFU tard

  3. 6 months ago
    Anonymous

    I have a PhD in deep learning. I worked professionally at Microsoft as a deep learning researcher. Currently I work at a startup implementing deep learning and LLM-based solutions for tech-illiterate clients.

    There is zero accurate content on the internet. It's not just a little off, it's complete nonsense, on the level of claiming 1+1=3 and deriving all kinds of absurdities from there (and every derivation is not only based on irrelevant axioms, it's also wrong).

    There are almost no resources to learn that content at all for a variety of reasons, but in the meantime the only documents worth a shit are:
    - Andrew Ng's lectures from last decade
    - the Deep Learning book by Goodfellow et al.
    - PRML, but it's more like a reference book than anything
    From there you have no choice but to read papers and ignore anything published past ~2016.

    • 6 months ago
      Anonymous

      Why ignore everything past 2016? Do you mean before 2016?
      I had a focus on ML during my CS degree, but I've gotten a bit rusty over the years, and I never worked at a huge company like Microsoft. Today I started implementing a transformer model following Andrej Karpathy's tutorial.
      Do you want to share more about your LLM work?

      • 6 months ago
        Anonymous

        I mean past 2016. The deep learning community has been adamantly against journal-based publication for ages, so it used conferences instead. However, around 2016-2017 the conferences were taken over by corporations. What was once a nearly purely academic venue, with quality peer review based mostly on merit and priding itself on unparalleled reproducibility (typically double-blind, which nobody else in academia even does, by the way), became a recruiting event full of spam and scams that prioritizes getting corpo papers through even when they're manifestos rather than research.
        As in all things, money talks.

        >Today I started implementing a transformer model following Andrej Karpathy's tutorial.
        You should instead start with the Attention Is All You Need paper, then read the BERT paper, then the BigBird paper, then tricks like FlashAttention.
        Karpathy's a mixed bag: the premise of his posts is solid, but he gets all kinds of details wrong that you won't notice unless you already know the material. Colah's much better, but I don't think he's posted in ages.
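
        The core of that paper is one formula, Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V; a minimal single-head sketch:

          import numpy as np

          def attention(Q, K, V):
              """Scaled dot-product attention, per "Attention Is All You Need"."""
              d = Q.shape[-1]
              scores = Q @ K.T / np.sqrt(d)        # similarity of every query to every key
              w = np.exp(scores - scores.max(axis=-1, keepdims=True))
              w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax
              return w @ V                         # weighted sum of the values

          # toy self-attention: 5 tokens of dimension 8, so Q, K, V share a source
          x = np.random.randn(5, 8)
          out = attention(x, x, x)                 # shape (5, 8)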

        >Do you want to share more about your LLM work?
        Mix of things, ranging from implementing full RAG-based """production""" systems to using regular deep learning approaches with a pretrained language model to replace meme shit like prompt-engineered OpenAI APIs applied to completely unsuitable tasks (on the level of using text generation to output a label for a binary classification task, instead of a tiny classifier that can do a million queries a second on a single cheap CPU; nonsense on that level). I moved completely to the "engineering"/application side because companies have all but stopped caring about development; they think they can use OpenAI for everything and don't want to do anything but text anymore.
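
        To be concrete about the classifier point, the kind of replacement I mean is on this level (scikit-learn sketch, toy data):

          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.linear_model import LogisticRegression
          from sklearn.pipeline import make_pipeline

          # the whole "LLM as a binary classifier" pipeline, replaced by a few lines
          texts = ["great product", "terrible support", "love it", "total scam"]
          labels = [1, 0, 1, 0]

          clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
          clf.fit(texts, labels)
          print(clf.predict(["scam product"]))  # runs absurdly fast on one cheap CPU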

        • 6 months ago
          Anonymous

          As an aside, I'm perfectly aware that the LLM papers I mention are >2017. The point is that you can't pick something at random anymore, and the signal-to-noise ratio is so bad it's barely even worth looking at anything from 2021 onward.

        • 6 months ago
          Anonymous

          Do you have any knowledge of what is being done with music generation AI recently? When do you think the results will be good enough to be indistinguishable from professionally produced music? Will it ever reach that point?

          • 6 months ago
            Anonymous

            I have never done anything with music, new methods or old, so I can only mention the few things I've observed as a complete outsider. Namely, I see two main directions people are going in for music generation: spectrogram-based diffusion models, and either spectrogram- or sequence-based transformers.
            In my experience with both kinds of systems on non-natural-language, non-image, non-music data, neither approach is suitable, and the results in music generation so far are exactly what I would expect based on my observations from using these models on other tasks.
            In addition, transformers are very inefficient. The reason they're so popular for language tasks is that we have near-infinite data to pull from, and the lack of timestep dependency means you can run them crazy fast if you have the VRAM for it (they run faster by default thanks to better parallelism, but are also far easier to run in a distributed fashion on huge GPU farms).
            The problem is that I don't think we have infinite music data the way we have infinite text and infinite images, so transformers for music generation are a dead end.
            My experience with diffusion models has simply been that anything not image-like fails on the same level as those spectrogram-based approaches; I think it's because the data is not sufficiently Gaussian-distributed. Better noise schedules and better noise function choices could unlock music generation, so I am far more confident in diffusion models for music than in transformers.
            However, plenty of people have tried all kinds of ways to get better noise functions and schedules working for non-image data, to no avail. A particularly notable string of failures is discrete diffusion, which is almost entirely unreproducible, and when it is reproducible it only works on one specific toy dataset.
            Another area of research, not yet used for music, that could supplant diffusion models is the modern line of VAE improvements like IntroVAE, but those are hard to train.
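
            For a sense of what "noise schedule" means here, the forward process of a standard DDPM-style diffusion is just this (minimal sketch):

              import numpy as np

              T = 1000
              betas = np.linspace(1e-4, 0.02, T)   # the standard linear DDPM schedule
              alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention

              def q_sample(x0, t):
                  """Sample x_t ~ q(x_t | x_0): the data, progressively gaussian-noised."""
                  eps = np.random.randn(*x0.shape)
                  return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

              # behaves well when x0 is gaussian-ish (images, spectrograms), less so otherwise
              x0 = np.random.randn(128)            # stand-in for a spectrogram slice
              x_noisy = q_sample(x0, t=500)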

            • 6 months ago
              Anonymous

              I don't think it will ever reach pro-grade music any more (but also no less) than diffusion output is pro-grade art or LLM output is pro-grade text. In other words, I am certain it will get so good that you could use it to generate assets for a game you only know how to do the programming for, and only smart people will even notice.
              But there is very little funding or interest in music generation at the industry and academia level (it's always been this way), which greatly hinders progress here.

        • 6 months ago
          Anonymous

          Thanks, I will check out some of the stuff you mentioned tomorrow. Already had a quick look at the "Attention is all you need" paper.

    • 6 months ago
      Anonymous

      >work at a startup
      I'm sure it will fail due to mismanagement

      >impossible to learn with online resources
      Dumb frick. OP wasn't even talking about deep learning. Machine learning is simple to understand in concept: it's just a sum of weights and biases, their derivatives, and the activation function.
      Obviously going any deeper is hard, but that is not exclusive to any field.
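
      Literally one neuron (sketch):

        import numpy as np

        def neuron(x, w, b):
            z = np.dot(w, x) + b   # sum of weights times inputs, plus bias
            a = np.tanh(z)         # activation function
            da_dz = 1 - a ** 2     # its derivative, for backprop
            return a, da_dz

        a, da = neuron(np.array([1.0, 2.0]), np.array([0.5, -0.3]), 0.1)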

      • 6 months ago
        Anonymous

        You just proved my point. You don't have the faintest idea what machine learning or deep learning is.

        • 6 months ago
          Anonymous

          You're right. Sorry .-.

          • 6 months ago
            Anonymous

            That's OK. I recommend using the material I posted (it's free). Even if you don't want to dig deep, the first few lessons of Andrew Ng's lectures, for instance, or the first few chapters of the Deep Learning book should give you a solid idea of the relevant terminology and how the pieces fit together.

    • 6 months ago
      Anonymous

      What about Andrej Karpathy's YouTube videos? He has a video implementing micrograd and explaining everything.

      • 6 months ago
        Anonymous

        Don't know his videos, so I can't comment.
        But micrograd is an implementation detail of the computation engine underneath a deep learning model, not the deep learning itself, so the topic is very different. None of the resources I listed would be useful to you if you're interested in those details.
        I personally don't know any good resources for this; I just learned by implementing one on my own. Maybe the Theano papers?
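
        For what it's worth, the core idea of such an engine fits in a page; a minimal micrograd-style sketch (reverse-mode autodiff on scalars):

          class Value:
              """Minimal reverse-mode autodiff node, micrograd-style."""
              def __init__(self, data, children=()):
                  self.data, self.grad = data, 0.0
                  self._children = children
                  self._backprop = lambda: None

              def __add__(self, other):
                  out = Value(self.data + other.data, (self, other))
                  def backprop():                        # d(a+b)/da = d(a+b)/db = 1
                      self.grad += out.grad
                      other.grad += out.grad
                  out._backprop = backprop
                  return out

              def __mul__(self, other):
                  out = Value(self.data * other.data, (self, other))
                  def backprop():                        # d(a*b)/da = b, d(a*b)/db = a
                      self.grad += other.data * out.grad
                      other.grad += self.data * out.grad
                  out._backprop = backprop
                  return out

              def backward(self):
                  # topologically sort the graph, then apply the chain rule backwards
                  order, seen = [], set()
                  def visit(v):
                      if v not in seen:
                          seen.add(v)
                          for c in v._children:
                              visit(c)
                          order.append(v)
                  visit(self)
                  self.grad = 1.0
                  for v in reversed(order):
                      v._backprop()

          # usage: d(a*b + a)/da = b + 1 = 4
          a, b = Value(2.0), Value(3.0)
          (a * b + a).backward()
          print(a.grad)  # 4.0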

    • 6 months ago
      Anonymous

      Tell me about symbolic AI. Can it make a comeback?

      • 6 months ago
        Anonymous

        There are some attempts at combining symbolic AI with deep learning, and I think rule mining by deep learning for symbolic AI and similar systems is worth investigating in light of modern deep learning results. However, research in the area is extremely slow and receives almost no interest or funding, in large part due to old stigma.
        I think we will see a comeback in maybe five years, when no further engineering advances have been made in LLMs and people are coming down hard from the LLM high. Whether that comeback will actually result in anything is, of course, a gamble, as with anything at that level of tech.
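
        The crudest version of that rule-mining idea, as a scikit-learn sketch (distilling a toy net's behavior into a readable tree; actual research goes far beyond this):

          from sklearn.datasets import load_iris
          from sklearn.neural_network import MLPClassifier
          from sklearn.tree import DecisionTreeClassifier, export_text

          X, y = load_iris(return_X_y=True)
          net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

          # fit a small tree to the *network's* predictions, then read it as rules
          tree = DecisionTreeClassifier(max_depth=3).fit(X, net.predict(X))
          print(export_text(tree, feature_names=load_iris().feature_names))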

        • 6 months ago
          Anonymous

          Do you think it's irresponsible to allow autonomous vehicles to be controlled by something that isn't AGI?

          • 6 months ago
            Anonymous

            Not at all. All those tools are built by us to make our lives easier, so of course their primary design premise should be that they are under our control, and the exact mechanism should likewise be under individual control.

  4. 6 months ago
    Anonymous

    Because le AI is a stupid fad

    The real money is in property management
