Cursor's AI code review tool Bugbot now resolves 78% of the bugs it flags, a 26-point jump from its 52% rate at launch in July 2025. The improvement comes from a self-improving system called Bugbot Learning that converts developer feedback into automated review rules.
What Happened
Cursor shipped Bugbot Learning, a feature that lets its code review bot improve accuracy through real-time feedback from pull request reviews. The system captures three types of signals: downvotes on unhelpful comments, developer replies that explain context, and patterns from human reviewer comments that flag issues Bugbot missed.
Each signal gets converted into a candidate rule. The system tests these rules against incoming pull requests, promoting successful ones to active status and disabling consistently problematic ones. Over 110,000 repositories have enabled the feature since its beta launch, generating more than 44,000 learned rules.
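The article doesn't publish Bugbot's internals, but the lifecycle it describes (signal → candidate rule → trial against live PRs → promote or disable) can be sketched in miniature. Everything below is illustrative: the `Rule` class, the thresholds, and the method names are assumptions, not Cursor's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the candidate-rule lifecycle described above.
# Thresholds and names are illustrative, not taken from Bugbot.

@dataclass
class Rule:
    description: str           # e.g. "don't flag unused vars in test fixtures"
    status: str = "candidate"  # candidate -> active, or candidate -> disabled
    helpful: int = 0           # findings developers acted on
    dismissed: int = 0         # findings downvoted or ignored

    def record_feedback(self, acted_on: bool) -> None:
        """Capture one PR-review signal for this rule."""
        if acted_on:
            self.helpful += 1
        else:
            self.dismissed += 1

    def evaluate(self, min_trials: int = 10, promote_rate: float = 0.7) -> str:
        """Promote rules that prove useful on live PRs; disable noisy ones."""
        trials = self.helpful + self.dismissed
        if trials < min_trials:
            return self.status  # not enough evidence yet
        rate = self.helpful / trials
        if rate >= promote_rate:
            self.status = "active"
        elif rate <= 1 - promote_rate:
            self.status = "disabled"
        return self.status
```

A rule that gets acted on in 8 of 10 trials would be promoted to `"active"`, while one dismissed in 8 of 10 would be `"disabled"`; anything in between stays a candidate gathering more evidence.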
Why It Matters
AI code review tools have struggled with high false-positive rates that train developers to ignore automated feedback. Bugbot's approach flips this by making the tool learn from each dismissal. At 78%, developers act on roughly four out of five Bugbot findings before merging their code.
The gap between Bugbot and competing tools is notable. Cursor reports that Bugbot leads CodeRabbit at 48.96%, GitHub Copilot at 46.69%, Codex at 45.07%, and Gemini Code Assist at 30.93%. That roughly 29-point lead over CodeRabbit, the next-closest competitor, suggests per-repository learning creates a compounding advantage as teams use it.
Key Details
- 78% resolution rate, up from 52% at July 2025 launch
- Three feedback signals: comment downvotes, developer replies, human reviewer patterns
- 110,000+ repositories with learning enabled
- 44,000+ learned rules generated across all repositories
- Rules are tested against live PRs before promotion to active status
The update arrives alongside Cursor 3's agent workspace redesign, which shipped parallel agents and Design Mode in early April. Together, these updates position Cursor as a full-lifecycle AI coding environment that covers both writing and reviewing code.
What to Do Next
Bugbot Learning is available now in repository settings. Enable it to start building custom rules from your team's review patterns. The system begins learning immediately from reactions and replies on Bugbot comments. Teams already using Cursor 3 get the feature automatically.
For teams evaluating AI code review, the GitHub Copilot Rubber Duck approach of pairing multiple AI models offers an alternative strategy. As the JetBrains survey showed, 90% of developers now use AI coding tools, making the quality of automated review a competitive differentiator.