They're building an AI that can do math.

https://imo-grand-challenge.github.io/

  1. 9 months ago
    Anonymous

    well, 8 years studying to get a college degree for nothing

  2. 9 months ago
    Anonymous

    Pretty sure the only reason they bother with this kind of thing is propaganda and marketing. It's already well-established that teaching a machine to perform well at competition/standardized test problems doesn't translate to any useful real-world abilities. It's an exercise in overfitting.

    • 9 months ago
      Anonymous

      free will is an overfitting of natural selection

      • 9 months ago
        Anonymous

        Free meds is a benefit of developed nations. Use it.

        • 9 months ago
          Anonymous

          No, my free will is overfit and doesn't work properly, so I refuse

        • 9 months ago
          Anonymous

          Incredibly based

          free will is an overfitting of natural selection

          Take your drugs, zogbot.

          • 9 months ago
            Anonymous

            777 and then 555. Trips of truth have spoken. Free will is real, atheists and materialists/determinists crying, losing hope. Trump 2024.

    • 9 months ago
      Anonymous

      I think it's more about formalizing mathematical axioms so they can work on newer types of AI in the future.

      • 9 months ago
        Anonymous

        Well, there's what you think it's about and then there's their actual stated goal: "build an AI that can win a gold medal in the competition".

    • 9 months ago
      Anonymous

      >anti-AI poltard

      Go back to your containment board, incel. I bet even a shitty AI would outperform you on one of these exams.

    • 9 months ago
      Anonymous

      >It's already well-established
      Can't wait to see all the ways this was established. I'm sure it's coming. Any minute now.

  3. 9 months ago
    Anonymous

    LOL good luck making an AI that can do that. I did olympiads in my childhood and won a medal at a national Physics olympiad. The Math olympiad is even more hardcore and is full of non-standard problems. Many of the combinatorics problems require abstraction and modelling of some 'physical' thing, such as this question from IMO 2023. There's a visual representation that's essential to understanding the question. If an AI genuinely solves this question I'd be shocked, and I would call it an AGI at that point, able to take descriptions of physical systems and abstract them into mathematical models.

    • 9 months ago
      Anonymous

      >muh AGI
      AI can't understand what it's saying. Even if it solves a math problem, it will always be a stochastic parrot that can't understand it

      • 9 months ago
        Anonymous

        >muh understanding
        "Understanding" in the context of abstract structures doesn't mean anything more than having an accurate internal model of the underlying relationships.

      • 9 months ago
        Anonymous

        So just like you then

    • 9 months ago
      Anonymous

      Don't worry, they're abstracting

    • 9 months ago
      Anonymous

      >containing at least k red circles
      a ninja path never needs to contain more than one red circle
      where have I misunderstood the problem?

      • 9 months ago
        Anonymous

        The question is asking for the largest k such that every Japanese triangle contains at least one ninja path passing through at least k red circles. To think of it another way: how large does k have to be before there is some Japanese triangle in which no ninja path contains k red circles? In the case n = 6, for the triangle in picrel it is impossible to find a ninja path containing 5 red circles, so k <= 4.
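
        If it helps, here's a tiny brute-force sketch in Python (my own, just to illustrate the quantifier structure): for each n it takes the minimum, over every possible placement of the red circles, of the best ninja path, and that minimum is exactly the k the problem asks for. Only tiny n are feasible since there are n! placements, but n = 1 and n = 2 give 1 and 2 as you'd expect.

        from itertools import product

        def best_path(reds):
            # reds[i] = column (0-based) of the red circle in row i; row i has i + 1 circles.
            # Row-by-row DP: w[j] = max number of red circles on a ninja path ending at circle j.
            w = []
            for i, r in enumerate(reds):
                w = [max(w[max(j - 1, 0):j + 1], default=0) + (1 if j == r else 0)
                     for j in range(i + 1)]
            return max(w)

        def k_of(n):
            # "every Japanese triangle has a path with at least k reds" means:
            # minimise the best path over every legal placement of the red circles.
            return min(best_path(reds) for reds in product(*[range(i + 1) for i in range(n)]))

        for n in range(1, 8):
            print(n, k_of(n))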

    • 9 months ago
      Anonymous

      How do you formalize this in Lean without either
      1. accepting a "solution" that simply says to search over all ninja paths and maximize k,
      or
      2. giving away information about the solution by specifying a particular form the answer must be in?
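
      For what it's worth, option 2 might look something like this in Lean 4 / Mathlib. Just a sketch: JapaneseTriangle and ninjaValue are made-up placeholders for the combinatorial setup, and writing the statement with the closed form Nat.log 2 n + 1 is exactly the "give the answer away" problem.

      import Mathlib

      -- Placeholders; a real formalization would have to define these properly.
      axiom JapaneseTriangle : ℕ → Type
      axiom ninjaValue : {n : ℕ} → JapaneseTriangle n → ℕ  -- best ninja path in a given triangle

      -- Option 2: the statement itself names ⌊log₂ n⌋ + 1, which leaks the intended answer.
      theorem imo2023_p5 (n : ℕ) (hn : 1 ≤ n) :
          IsGreatest {k : ℕ | ∀ T : JapaneseTriangle n, k ≤ ninjaValue T} (Nat.log 2 n + 1) := by
        sorry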

      • 9 months ago
        Anonymous

        apparently they don't have a good answer to this yet:
        https://leanprover.zulipchat.com/#narrow/stream/208328-IMO-grand-challenge/topic/problems.20that.20ask.20for.20both.20data.20and.20a.20proof

    • 9 months ago
      Anonymous

      Wow, this shit is way more elaborate than when I was a kid.
      I think it's k = floor(log2(n)) + 1.
      Just because arranging it in the shape (fig) means they are in parallel and can never meet, and each partition of these parallel sections is defined by some length^2. This is because, to repeat the shape, the red circles have to gradually move from the left to the right of the pyramid.

      AI could solve this problem, though. They're fed a crazy amount of textbooks, and these solutions end up being more trivial to achieve than expected. I'm only saying this because I have thrown hard, similar problems at the AI and passed last year's uni assignments that way lol. Whether it gets it on the first try or not, or without prompt guiding, is down to luck and the competition, to be honest. I've had cases where the AI was able to fluidly generate an answer to a complicated topology question, only for me to regenerate it and get a complete hallucination.

    • 9 months ago
      Anonymous

      How do some people "understand" the question when it doesn't have enough information to be understood? Are the red circles in random places, or do you pick them, in which case the solution is trivially k = n? Does "each Japanese triangle" mean triangles of different n, or with different combinations of spots for the red circles, or both? There is no way to know what this question wants because the information is not provided.

      • 9 months ago
        Anonymous

        The circles are in 'random' places, in the sense that for a fixed n you have to find the greatest k, in terms of n, such that for each (meaning any and all) Japanese triangle of size n, with its n red circles placed anywhere the rules allow, it is possible to construct a ninja path that passes through at least k of the red circles.

        • 9 months ago
          Anonymous

          moronic. There are so many ways to interpret the question. So now, are all the paths random, or are they specifically chosen to hit the most red circles? Because that changes the result for a subset of the many interpretations of the "question".
          If I decrypt the last sentence right in one of the many possible ways
          >greatest k such that in each jap triangle there is a ninja path containing at least k circles
          it wants the "weakest link", i.e. the placement of red spots with the lowest possible k, because "at least k IN EACH" triangle means a k that works for ALL of them. So it's about finding the most difficult placement for the ninja, the one that makes him walk the longest path to reach the reds, i.e. one where the reds are the furthest apart. Is that what it wants?

          • 9 months ago
            Anonymous

            Yes it has to work for ALL OF THEM

            • 9 months ago
              Anonymous

              In that case, after thinking about it for a bit, I imagine it like a gas with the density of the red particles going down as n grows, their density D = 1/n. Then you need the average distance between reds in relation to density, and there I imagine a hex grid would be a good arrangement because each point is equally far apart, but I have no idea what the relationship between vertices, area and distance is in a hex grid. But once you have the distances at each density, Dist(Dens(n)), you could calculate the average distance DistAvg = Integral(Dist)/n for the whole thing, then construct a hex grid for that DistAvg, count the number of rows of vertices from top to bottom, and that's k. Is that the right direction?

              • 9 months ago
                Anonymous

                I really don't understand this, but keep going and see what kind of answer you get. Try the small cases n = 1, 2, ...: for n = 1 clearly k = 1, and for n = 2 clearly k = 2. You should be able to proceed from there to test whether your reasoning works.

          • 9 months ago
            Anonymous

            There may be multiple ways to interpret the question but only one of them leads to interesting results. If you can't figure out what the question wants maybe you're not its intended audience.

    • 9 months ago
      Anonymous

      I went this year. This specific problem made me want to blast my head off. Silver medal tho, also the trip was nice

      • 9 months ago
        Anonymous

        good job anon

        • 9 months ago
          Anonymous

          Thank you anon, I do it all to get into a good university and help my family out

    • 9 months ago
      Anonymous

      hello vodki

    • 9 months ago
      Anonymous

      Just doing a greedy algorithm approach I'm getting
      k <= floor[log(n)/log(2)] + 1
      Idk how to prove it is sharp.

      • 9 months ago
        Anonymous

        Yes, true. Naturally I was wrong and forgot the 60° movement restriction, so you can just pack them tightly in the closest blind-spot line <60° (30°), so the ninja can only intersect that line, and that makes things way simpler. Here's another version that shows the line

      • 9 months ago
        Anonymous

        For the kids who haven't figured out how to even approach the problem: first, I came up with a way to keep track of the maximum-weight path for a given triangle.
        I assign a number to each circle of the current bottom row that indicates the maximum weight (number of red circles) over all paths that reach that circle.
        From there, you can generate the weights for the next row by taking the max of the two circles above (+1 at the location of the new row's red circle).

        The triangle in the problem description would have a numbering:
        1
        2 1
        2 2 2
        2 2 3 2
        2 2 3 4 2
        3 2 3 4 4 2
        This says the maximum weight of all paths for this triangle is 4

        My greedy algorithm to generate a triangle of size n is to use the triangle it generates for size n-1 then add another row.
        To choose the location of the red circle in the new row, put it on the left if the bottom row of the n-1 triangle is all the same number.
        Put it two spaces to the right of the rightmost maximum value of the bottom row of the n-1 triangle otherwise.
        This gives the numbering:
        1
        2 1
        2 2 2
        3 2 2 2
        3 3 3 2 2
        3 3 3 3 3 2
        3 3 3 3 3 3 3
        4 3 3 3 3 3 3 3
        4 4 4 3 3 3 3 3 3
        4 4 4 4 4 3 3 3 3 3
        etc.

        Since this is a greedy algorithm, it is not guaranteed to be optimal.
        You can probably prove it is optimal.
        I was thinking maybe use the rule that recursively generates the weights to prove nothing is better.
        You can prove a simple recursion for the sum of the nth row of any triangle:
        sum(n) = sum(n-1) + 1 + (1/2)*Sum[ |Δw| ]
        where the Δw are the forward differences of the weights on row n-1 padded on both ends by all 0's (so |L-0| = L and |0-R| = R are counted).
        You can prove (1/2)*Sum[ |Δw| ] >= Max(n-1) with equality holding when the weights monotonically increase then monotonically decrease.
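
        Rough Python version of the above, purely as a numerical sanity check (my own code): the row-by-row weight propagation plus the greedy construction. For every n I checked, the greedy triangle pins the best path at exactly floor(log2(n)) + 1 red circles, which matches the conjectured upper bound on k; it says nothing about the lower bound, i.e. that every triangle admits a path that good.

        def max_weight(reds):
            # reds[i] = column (0-based) of the red circle in row i; row i has i + 1 circles.
            # w[j] = maximum number of red circles over all ninja paths ending at circle j
            # of the current bottom row, updated row by row as described above.
            w = []
            for i, r in enumerate(reds):
                w = [max(w[max(j - 1, 0):j + 1], default=0) + (1 if j == r else 0)
                     for j in range(i + 1)]
            return max(w)

        def greedy_reds(n):
            # Greedy construction from above: red on the left if the current bottom row
            # is constant, otherwise two places to the right of its rightmost maximum.
            reds, w = [], []
            for i in range(n):
                if not w or len(set(w)) == 1:
                    r = 0
                else:
                    m = max(w)
                    r = max(j for j, v in enumerate(w) if v == m) + 2
                reds.append(r)
                w = [max(w[max(j - 1, 0):j + 1], default=0) + (1 if j == r else 0)
                     for j in range(i + 1)]
            return reds

        # Triangle from the worked example above (red columns read off its weight table).
        print(max_weight([0, 0, 2, 2, 3, 0]))   # -> 4

        # n.bit_length() == floor(log2(n)) + 1 for n >= 1.
        for n in range(1, 257):
            assert max_weight(greedy_reds(n)) == n.bit_length()
        print("greedy triangles force exactly floor(log2(n)) + 1 for n up to 256")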

  4. 9 months ago
    Anonymous

    >AI
    A buzzword to attract investors

  5. 9 months ago
    Anonymous

    Doesn't matter. If an AI proves something we can't prove, with a proof that requires superhuman intelligence to understand, we'll think the AI was defective and reprogram it to be "smarter."

    • 9 months ago
      Anonymous

      AI proves your mom sucks black wiener
      LOLOLOLOLOLOLOLOLOLOLOLOLOLOLOLPLOLOPOLOLOLOLOLOLOLOLOLOLOLOLOLOLOLOLOLOLOL

      • 9 months ago
        Anonymous

        seethe

    • 9 months ago
      Anonymous

      You're moronic. If the proof is verifiably a proof there is no denying it once it exists even if you don't understand it.

      • 9 months ago
        Anonymous

        If the proof is only verifiable by the AI that offered the proof, then we'll think the AI malfunctioned and "lobotomize" it.

        • 9 months ago
          Anonymous

          schizo

          • 9 months ago
            Anonymous

            Yes, that's what we'll call any AI that offers a proof we can't understand.

            >This is true only if P = NP

            No, it has nothing to do with that. It's true iff the proof requires a superhuman intelligence to understand.

            • 9 months ago
              Anonymous

              I can't conceive of a proof that's beyond human comprehension. They're broken up into bite-sized chunks.

              • 9 months ago
                Anonymous

                That's vacuously true. No one can conceive of a proof that's beyond human comprehension. If you saw one, you'd think it was nonsense.

              • 9 months ago
                Anonymous

                I can't conceive of an apple that's, by definition, beyond human comprehension. What you're postulating is silly.

              • 9 months ago
                Anonymous

                And if an AI showed you that apple, you'd think the AI was glitching.

              • 9 months ago
                Anonymous

                You'd have a point if you were arguing that humans can't conceive of the stochastic optimization process AI uses to bias itself. But a proof? Proofs are understandable by the nature of their format. The AI could stumble across some "knowledge" that isn't understandable, but if it then went and proved it, we could work backwards from the proof. Furthermore, why are you even talking about proofs we can't understand, as another anon already pointed out?

              • 9 months ago
                Anonymous

                See:
                >Who cares about the IMO? The morons are talking about the functional limits of AI/human interaction. You're talking about whether AI can piss in a kiddy pool or not.

                AI can never advance human knowledge beyond the limits of human knowledge. There are no magic proofs.

              • 9 months ago
                Anonymous

                I'm not convinced of that. I haven't tried, but I doubt current LLMs could write proofs. They're not trained for that. However, if you could train one specifically to write proofs, which you can, you could have a suite of agents interrogate one another in the pursuit of some objective in parallel. If the set of all possible proofs can be expressed in human language, it's not impossible for an AI to formulate something novel. Useful? Eh.

              • 9 months ago
                Anonymous

                I agree that AI could accelerate the writing of proofs. It also follows that writing proofs faster gives you a new proof sooner. I'm skeptical of the idea that AI can be a game-changer for humanity. (I don't think we really disagree, because "Useful? Eh." is a good summary.)

              • 9 months ago
                Anonymous

                The only way I can conceive of an AI system adding to human knowledge in a meaningful way is using one AI system to discover an anomaly in data with predictive qualities that humans have failed to identify for whatever reason, then a secondary AI system that translates the discovery into something that's able to be comprehended. The output of the first system is incomprehensible, since the link between input and output is millions of weights and biases. However, it might be possible to have a second system attempt to rigorously work its way back to the input from the output. Dunno. This doesn't exist, so I'll believe it when I see it.

              • 9 months ago
                Anonymous

                I'd call any number of single AI steps a general "AI." My objection to AI being transformative is more like this.

                If AI proved that adding 2 squeezes of toothpaste into 2 tons of concrete made it more hurricane-resistant, then we'd build all our buildings with 2 squeezes of toothpaste. If AI proved that blessing a building with holy piss from the pope's dick made it more hurricane-resistant, then we'd assume it was malfunctioning and reboot it.

              • 9 months ago
                Anonymous

                That's fair. The current conventional wisdom with AI is to throw out the model if the results are ridiculous and unexplainable. What would be transformative is a system that could take these seemingly ridiculous correlations and let us know why in human language. I think the entire thread jumped on you because you're using "correlation" and "proof" interchangeably. AI systems that aren't language models produce predictive value, but they're incomprehensible, like you pointed out.

              • 9 months ago
                Anonymous

                I've never written the word correlation.
                >take these seemingly ridiculous correlations and let us know why in human language.
                If they could, it wouldn't be transformative. It would just be a new normal human proof written more efficiently.

              • 9 months ago
                Anonymous

                >but if it then went and proved it,
                LLMs and AI in general can't prove anything because they do not compute their tokens. Maybe you can give them some methods to compute tokens, but that is a different type of neural network than language approximation.

    • 9 months ago
      Anonymous

      This is true only if P = NP

    • 9 months ago
      Anonymous

      That makes no sense. Any step that is too complex to understand can be expanded into simpler steps and so on, and so on.

  6. 9 months ago
    Anonymous

    I don't get all the morons in this thread talking about proofs we don't understand. The IMO, while being a very difficult exam, contains problems that can be solved through application of rather fundamental mathematical concepts. The questions are literally designed to be solved with """school""" concepts, in the sense that no theorems from real analysis, higher algebra, unexplored areas of math, etc. are needed to approach them. There is room for creativity, but that is limited as well. I find it hard to believe that there exist esoteric proofs of such problems that an AI could come up with but that are impossible for humans to understand. Olympiads are not research-level math projects. The students aren't expected to prove something that hasn't been proven yet; the problems they are solving are already solved. I don't get the schizo posting about an AI meant to work with a limited number of theorems in the olympiad syllabus suddenly coming up with something crazy that we cannot understand.

    • 9 months ago
      Anonymous

      Who cares about the IMO? The morons are talking about the functional limits of AI/human interaction. You're talking about whether AI can piss in a kiddy pool or not.

  7. 9 months ago
    Anonymous

    Artists, writers, coders, voice actors and now mathematicians. Meanwhile I already used the most braindead simple tools to analyze basically all kinds of spectrums and chromatograms.
    Chem bros, we are going to be eating good.

  8. 9 months ago
    Anonymous

    You mean wolfram alpha?

    • 9 months ago
      Anonymous

      have you ever tried to use Wolfram Alpha?
      that shit is so bad.
      even straightforward equations it doesn't understand if they contain too many units.
      natural language only works for the most basic things and is laughable compared to the natural language understanding of LLMs

  9. 9 months ago
    Anonymous

    k, but can it do it right? even the finitards here can do math, they just do it wrong

  10. 9 months ago
    Anonymous

    >build an AI that can get a gold medal in the competition
    >LLMs can only answer paper questions
    >neural networks can only solve specific computational tasks of a certain domain and generalize only by association of tasks
    Math olympians are larpers who especially love to RP their human body as a neural network when they solve math problems, when in fact they are just autistic psychopaths on meth

  11. 9 months ago
    Anonymous
    • 9 months ago
      Anonymous

      Why does Ted abuse the semicolon so much in his explanation of (1)? The first should be no punctuation at all and the second should be an em dash.
