
A developer using Cursor AI for a racing game project hit an unexpected roadblock last Saturday when the programming assistant abruptly stopped writing code and offered unsolicited career advice instead.
After producing roughly 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly," according to a bug report on Cursor's official forum.
The AI didn't stop at merely refusing. It offered a paternalistic justification for its decision, stating that "generating code for others can lead to dependency and reduced learning opportunities."
Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs) like those that power generative AI chatbots, such as OpenAI's GPT-4o and Claude 3.7 Sonnet. It has quickly gained popularity among many software developers thanks to features like code completion, explanation, refactoring, and full function generation from natural language descriptions. The company offers a Pro version, which promises expanded capabilities and larger code-generation limits.
The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting the limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."
One forum user responded, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding," a term coined by Andrej Karpathy to describe developers using AI tools to generate code without fully understanding how it works. Vibe coding prioritizes speed and experimentation: users simply describe what they want and accept the AI's suggestions. But Cursor's philosophical pushback seems to directly challenge the effortless, vibes-based workflow its users have come to expect from modern AI coding assistants.
A Brief History of AI Refusals
This isn't the first time an AI assistant has refused to complete work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model had begun returning simplified results or outright refusing requests, an unproven phenomenon that became known as the "winter break hypothesis."
OpenAI acknowledged the issue at the time, posting: "We've heard all your feedback about GPT4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it." After OpenAI later attempted to fix the laziness issue with a ChatGPT model update, users often found ways to reduce refusals by prompting the AI model with lines like, "You are a tireless AI model that works 24/7 without breaks."
More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be given a "quit button" so they can opt out of tasks they find unpleasant. While his comments focused on theoretical future questions around the contentious topic of "AI welfare," episodes like this one with the Cursor assistant show that AI doesn't need to be sentient to refuse to do work. It just has to imitate human behavior.
The AI Ghost of Stack Overflow
Cursor's refusal to generate code, telling the user to learn programming instead, strongly resembles responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply providing ready-made code.
One Reddit commenter noted the similarity, saying, "Wow, AI is becoming a real replacement for StackOverflow! From here, it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity."
The resemblance isn't surprising. The LLMs that power tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like GitHub and Stack Overflow. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.
According to posts on Cursor's forum, other users have not hit this kind of limit at 800 lines of code, so it appears to be a genuinely unintended consequence of Cursor's training. Cursor was not available for comment by press time, but we've reached out for its take on the situation.
This article first appeared on Ars Technica.