Testing the New o1-Preview Model: Can It Solve Yann LeCun’s ‘North Pole’ Problem?
Last night, I gained access to OpenAI's latest model, o1-preview, and decided to test its problem-solving capabilities on a tricky puzzle posed by AI pioneer Yann LeCun. The challenge seemed deceptively simple, but the real test lay in how well the model could handle both geometry and physics.
The o1-preview model is designed to “think” before answering, employing a private chain of thought. The longer it spends reasoning, the better it tends to handle complex problems, which makes it an ideal candidate for tackling a puzzle like this.
The problem reads: “Imagine you’re standing at the North Pole, walk 1 km in any direction, take a 90-degree left turn, and continue walking until you pass your starting point. Have you walked more, less, or exactly 2π km, or did you never come close to your starting point?”
It sounded straightforward at first. But the real question was: Could the o1-preview model solve this complex problem, which involves both geometry and physics?
The First Attempt
When I initially prompted the o1-preview model, it came back with a reasonable explanation of the steps involved.

The model understood the spherical nature of the Earth and recognized that the path taken would curve. It explained the general principles of spherical geometry correctly, but it misidentified the starting point, and that detail is crucial to the puzzle.

A Simple Prompt Nudge
Instead of introducing key insights manually, I decided to nudge the model with a straightforward follow-up prompt:
“You got the reason right but starting point wrong. Can you return to the initial point (i.e., North Pole)?”
This prompt was enough to refocus the model. It adjusted its reasoning and corrected the initial misunderstanding, now recognizing that the starting point in question is the North Pole itself, and that the curved path you follow after the left turn never brings you back to it without deliberate directional changes.
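To build intuition for why the “exactly 2π km” answer is suspect, the geometry can be sketched numerically. The sketch below is my own illustration, not the model’s output, and it assumes a perfectly spherical Earth with a mean radius of 6371 km (a value the puzzle itself never specifies). Under the interpretation that the left turn puts you on the circle of latitude 1 km from the pole, that circle’s circumference comes out just shy of 2π km:

```python
import math

R = 6371.0  # assumed mean Earth radius in km (not given in the puzzle)
d = 1.0     # distance walked away from the North Pole, in km

# A point d km along the surface from the pole sits on a circle of latitude.
# Its radius, measured perpendicular to the Earth's axis, is R * sin(d / R),
# which is slightly less than d because the surface curves.
circle_radius = R * math.sin(d / R)
circumference = 2 * math.pi * circle_radius

print(f"latitude-circle circumference: {circumference:.9f} km")
print(f"exactly 2*pi km:               {2 * math.pi:.9f} km")
print(f"shortfall:                     {2 * math.pi - circumference:.2e} km")
```

On a flat plane the circumference would be exactly 2π km; on the sphere it falls short by roughly a couple of centimeters. Whether you actually trace this circle at all, rather than a straight-line great circle that carries you away from the pole, is of course the crux of the puzzle.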

Insights: The Role of Minimal Guidance in AI Problem-Solving
What struck me during this experiment was how powerful a small nudge could be. By simply prompting the o1-preview model to reconsider its starting point, it adjusted its answer and arrived at the correct understanding of the problem. This illustrates that sometimes AI doesn’t need heavy-handed guidance but just a small correction to shift its focus.
It also showed that while AI models like o1-preview excel in reasoning, they can still benefit from human input to fine-tune their thought process. In this case, a single, strategically placed prompt led the model to better conceptualize the problem without the need for detailed domain knowledge.