> All changes described in this section improve the AG domain language coverage from 66% to 88% on all 2000-2024 IMO geometry problems. The remaining 12% contain 3D geometry, inequalities, non-linear equations, and countably many points (i.e. problems that have n points, where n is an arbitrary positive integer). All problems (covered and not covered) by AG1 and AG2 can be found in Figure 8. Problems not covered are referred to as "Not attempted".
The above explanation on page 5 was really interesting to me - so it's not that AlphaGeometry2 failed on these 12% of problems, but rather that it literally didn't have the words to tackle them.
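To make "didn't have the words" concrete: the domain language only admits statements assembled from a fixed, finite vocabulary of predicates over a finite set of named points. Here is a toy Python sketch of that idea; the predicate names loosely follow the AG1 paper, but the checker itself is invented for illustration and is not DeepMind's code.

    # A "statement" is a list of clauses over named points. Only a closed
    # set of predicates, each with a fixed arity, is allowed -- that closed
    # set is the domain language. (Toy sketch, not AlphaGeometry's parser.)

    ARITIES = {"coll": 3, "cyclic": 4, "midp": 3, "perp": 4, "para": 4}

    def expressible(clauses):
        """True iff every clause uses a known predicate with its fixed arity."""
        return all(pred in ARITIES and len(pts) == ARITIES[pred]
                   for pred, *pts in clauses)

    # "M is the midpoint of AB, and A, B, C, D are concyclic" -- expressible:
    print(expressible([("midp", "M", "A", "B"),
                       ("cyclic", "A", "B", "C", "D")]))  # True

    # A problem quantifying over n points for arbitrary n never yields a
    # finite clause list to hand the checker, so it is "not attempted"
    # rather than attempted and failed.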
_diyar 35 days ago
the savvy researcher knows when to publish an exciting but limited result, enabling themselves to deliver a juicy follow-up
Bjorkbat 35 days ago
Before people lose their minds over AlphaGeometry, I thought I'd share this gem from the r/math subreddit, which lends some insight into how the original AlphaGeometry appears to work, from the perspective of someone far more literate in math than the rest of us:
https://www.reddit.com/r/math/comments/19fg9rx/some_perspect...
The tl;dr is that a lot of the heavy lifting was done by an algorithm called Deductive Database + Algebraic Relations.
I must stress that the results were still impressive: at the time, scoring a silver in Olympiad geometry was seen as out of reach for AI, and it's impressive that they were able to do this with a mostly deterministic approach. The point is that you really didn't need that much AI to score a silver.
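For a feel of what that deterministic engine does, here is a minimal sketch of the "deductive database" half: forward-chain hand-written geometry rules over a set of facts until nothing new appears. The rule and predicate spellings are loosely modeled on the AG1 paper; this is an illustration of the saturation idea, not DeepMind's implementation (the AR half, which does angle/ratio chasing via linear algebra, is omitted).

    # Facts are (predicate, points...) tuples. One example rule: the
    # midsegment theorem, midp(M,A,B) & midp(N,A,C) => para(M,N,B,C).
    # (Toy sketch of DD-style saturation, not DeepMind's engine.)

    facts = {
        ("midp", "M", "A", "B"),   # M is the midpoint of AB
        ("midp", "N", "A", "C"),   # N is the midpoint of AC
    }

    def rule_midsegment(fs):
        """midp M A B & midp N A C  =>  para M N B C."""
        mids = [f for f in fs if f[0] == "midp"]
        return {("para", m, n, b, c)
                for (_, m, a1, b) in mids
                for (_, n, a2, c) in mids
                if m != n and a1 == a2 and b != c}

    def saturate(fs, rules):
        """Apply every rule until no new fact appears (the DD fixpoint)."""
        fs = set(fs)
        while True:
            new = set().union(*(r(fs) for r in rules)) - fs
            if not new:
                return fs
            fs |= new

    closed = saturate(facts, [rule_midsegment])
    print(("para", "M", "N", "B", "C") in closed)  # True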
menaerus 34 days ago
> The tl;dr is that a lot of the heavy lifting was done by an algorithm called Deductive Database + Algebraic Relations.
Let me make sure I understood this correctly: the author on Reddit formed his opinion based on "examining the Nature article more carefully"?