Comparing AI Coding Assistants: Cursor AI vs VS Code Copilot

In today’s rapidly evolving landscape of developer tools, AI-powered coding assistants have emerged as game-changers for programmers of all experience levels.
This blog aims to provide fellow developers with an in-depth, hands-on comparison of these powerful tools. Beyond surface-level features, I’ll dive into how these AI assistants perform in real-world coding scenarios, examining their strengths and limitations.
Throughout this analysis, I’ll focus particularly on their agent/chat functionalities.

Key Terms: Understanding LLMs and AI Agents

Before diving into our comparison of Cursor AI and VS Code Copilot, let’s clarify some fundamental concepts that will appear throughout this analysis:

Large Language Models (LLMs)

Large Language Models, or LLMs, are sophisticated AI systems trained on vast amounts of text data to understand and generate human-like text. These models can recognize patterns, context, and nuances in language, allowing them to:
  • Write and debug code across numerous programming languages
  • Understand natural language queries about programming concepts
  • Generate explanations of complex code
  • Suggest improvements to existing codebases

Both Cursor AI and VS Code Copilot are powered by underlying LLMs that enable their core functionality. The capabilities of these tools are directly influenced by the particular LLMs they utilize and how they’ve been fine-tuned for coding-specific tasks.

AI Agents

While LLMs provide the foundational language understanding, AI agents represent a more advanced integration of these models into interactive systems. An AI agent:

  • Maintains context throughout a conversation or coding session
  • Can perform sequences of actions to accomplish goals
  • Often has the ability to access external tools or resources
  • May be able to break complex problems into manageable steps
  • Can adapt its responses based on user feedback

The “agent mode” in Cursor AI and the “agent/chat mode” in VS Code Copilot refer to these more interactive implementations that go beyond simple code completion. These agent capabilities allow developers to have extended dialogues about their code, request multi-step assistance, and collaborate with the AI in a more natural workflow.

When using AI agents, users can choose which LLM to use to achieve their goals. Below is a chart of the most popular LLMs as of March 2025. These models are graded and ranked, based on peer reviews, on their ability to solve coding problems and how well they can reason through a task to reach the desired outcome. Claude 3.7 Sonnet is currently the benchmark for LLMs, but this will probably change over the coming months as more advancements are made.

[Chart: popular LLMs ranked on coding and reasoning performance, March 2025]

Pricing Plans

Cursor AI Pricing: Options for Different User Needs

Cursor AI offers a tiered pricing structure designed to accommodate different user types, from hobbyists to professionals to business teams. Here’s a breakdown of their current pricing model:

  • Free Tier
  • Pro Tier – $20 per month
  • Business Tier – $40 per user, per month

See below the list of features included with each tier:

[Screenshot: features included with each Cursor AI tier]

In the Cursor account settings, you can also view the usage details.

 

[Screenshot: usage details in the Cursor account settings]

VS Code Copilot Pricing

[Screenshot: VS Code Copilot pricing plans]

Test Setup: Building a CRUD Application with Angular and .NET 8

For this comparative analysis, I’ll be testing both Cursor AI and VS Code Copilot on their ability to guide me through creating a basic CRUD application. This practical test will help demonstrate how each tool handles a common development scenario that many developers face regularly.

The Challenge: Angular + .NET 8 CRUD Application

I’ve designed a test case that incorporates several modern web development technologies and patterns:

  • Frontend: Angular
  • Backend: .NET 8 Web API
  • Database: SQL Server with Entity Framework Core
  • Architecture: Service pattern implementation for the backend
  • Functionality: Basic CRUD operations with two primary views:
    • A form page for data entry/editing
    • A listing page showing all records with edit/delete capabilities

The Prompt

To ensure a fair comparison, I’ll use the following prompt with both AI coding assistants.

“I want to make a basic web application using Angular as the front end and .NET 8 Web API as my backend. I also want to use a SQL Server database using Entity Framework Core.

For my backend, I want to use the service pattern. For my frontend, I want two pages: one page contains a form that saves to the database, and the second page contains all the saved form records from our database.

I want this to be a basic CRUD application.

Can you guide me through setting this up step by step?”

Cursor AI Test Results

Initial Cursor setup and screens:

A blank folder called CursorDemo was created and opened in the Cursor AI workspace. On the right side of the screen the user can select between three modes: Agent, Ask, and Edit. Users can also change which LLM they are using; please see both screens below:

  1. Agent Mode – Agent Mode functions as an interactive coding partner capable of complex reasoning and multi-step tasks. It maintains context across your entire project, allowing it to understand broader development goals and provide comprehensive solutions. This mode excels when you need assistance with architectural decisions, complex implementations, or when you want a conversational approach to solving development challenges.
  2. Ask Mode – Ask Mode provides a streamlined question-and-answer interface for quick coding queries. It’s optimized for specific questions about your code like “How do I implement pagination in Angular?” or “What’s wrong with this function?” While less contextually aware than Agent Mode, Ask Mode delivers faster, more targeted responses ideal for discrete problems or clarifications without requiring extensive project understanding.
  3. Edit Mode – Edit Mode focuses specifically on modifying existing code. When you select code and engage Edit Mode, Cursor AI analyses the selection and offers targeted improvements, refactoring suggestions, bug fixes, or optimizations. This mode is perfect for enhancing code quality, fixing issues, or implementing requested changes to specific sections of your codebase rather than generating entirely new implementations.

[Screenshots: Cursor’s mode selector (Agent, Ask, Edit) and LLM selection menu]

Timelapse of project creation

A full video was recorded of the session. Below is a description and summary of the inputs and results.
Using agent mode, I input the following prompt and clicked send.

[Screenshot: the prompt entered in Agent mode]

Cursor now starts to analyse the folder and recognises that it is empty. It generates commands to create a client and an API folder. Cursor lets the user run each command themselves, or it can run each command automatically. I opted to run each command myself, as it gave a better understanding of why certain errors were occurring; it also gives the developer real-time oversight of everything being created in the project and the chance to adjust if need be.


Cursor then creates the web API project.

Once complete, the Entity Framework Core packages were installed for our SQL Server database.
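
For readers unfamiliar with this step, the sketch below shows how EF Core with SQL Server is typically wired into a .NET 8 Web API. It is a minimal illustration rather than the agent’s exact output: the AppDbContext class, the Product entity, and the “DefaultConnection” key are my own naming assumptions.

```csharp
// Program.cs – minimal sketch of registering EF Core with SQL Server in .NET 8.
// AppDbContext, Product, and "DefaultConnection" are illustrative names, not the agent's exact output.
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

var app = builder.Build();
app.MapControllers();
app.Run();

// AppDbContext.cs – the EF Core context that maps entities to SQL Server tables.
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Product> Products => Set<Product>();
}

// Product.cs – a simple entity, along the lines of the Product class the agent creates later on.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
}
```

The connection string itself would live in appsettings.json under ConnectionStrings:DefaultConnection, pointing at the SQL Server instance.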

Once the basic API project and dependencies were installed, the agent navigates to the frontend directory and begins to install the Angular client.

After the client project had been set up, the agent started to work on the code itself, beginning with the API: it edited our CORS policy so the API could communicate with the client.
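
That CORS change typically looks something like the excerpt below. Again, this is a hedged sketch rather than a copy of the agent’s edit: the policy name and the http://localhost:4200 origin (Angular’s default dev-server URL) are assumptions.

```csharp
// Program.cs (excerpt) – sketch of a CORS policy allowing the Angular dev server to call the API.
// "AllowAngularClient" and the localhost:4200 origin are illustrative assumptions.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
    options.AddPolicy("AllowAngularClient", policy =>
        policy.WithOrigins("http://localhost:4200") // Angular's default dev-server URL
              .AllowAnyHeader()
              .AllowAnyMethod()));

builder.Services.AddControllers();

var app = builder.Build();

app.UseCors("AllowAngularClient"); // apply the policy before the controller endpoints run
app.MapControllers();
app.Run();
```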

The agent begins creating the controllers, models, services, and interfaces. As no specific requirements were given, the agent decides to create a Product class with a corresponding controller and service.
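
To make the service pattern concrete, here is a rough sketch of what such a Product service and controller could look like, reusing the AppDbContext and Product types from the earlier sketch. The names and members are illustrative assumptions, not the agent’s generated code.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// IProductService.cs – the abstraction the controller depends on (register it with
// builder.Services.AddScoped<IProductService, ProductService>() in Program.cs).
public interface IProductService
{
    Task<List<Product>> GetAllAsync();
    Task<Product?> GetByIdAsync(int id);
    Task<Product> CreateAsync(Product product);
}

// ProductService.cs – EF Core-backed implementation of the service.
public class ProductService : IProductService
{
    private readonly AppDbContext _context;

    public ProductService(AppDbContext context) => _context = context;

    public Task<List<Product>> GetAllAsync() => _context.Products.ToListAsync();

    public async Task<Product?> GetByIdAsync(int id) => await _context.Products.FindAsync(id);

    public async Task<Product> CreateAsync(Product product)
    {
        _context.Products.Add(product);
        await _context.SaveChangesAsync();
        return product;
    }
}

// ProductsController.cs – a thin API controller that delegates all work to the service.
[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly IProductService _service;

    public ProductsController(IProductService service) => _service = service;

    [HttpGet]
    public async Task<ActionResult<List<Product>>> GetAll() => await _service.GetAllAsync();

    [HttpPost]
    public async Task<ActionResult<Product>> Create(Product product) => await _service.CreateAsync(product);
}
```

The point of the pattern, as requested in the prompt, is that the controller stays thin and all data access goes through the service layer.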

The agent then starts to create our first database migration.

At this point, the agent started encountering errors: when running the update-database command, an error was returned indicating an issue with the culture identifier. I tried multiple times to get the agent to resolve this, but it ended up trying to fix code that wasn’t the source of the problem. I had to google the issue myself and quickly found an answer.

Once I resolved the issue, I continued with the workflow. The agent began to create the client-side services and components.

At this point I wanted to test my client. The following image shows the project at this stage. The agent had successfully created a products page and an add-product page that navigated correctly between each other.

I now wanted to test the connection between the client and the API. I encountered some issues with the database connection, but since the project had no context about my SQL Server instance, this was an issue I resolved myself. Note – once the database connection had failed on multiple occasions, the agent tried to change course and implement in-memory data storage instead. I had to stop the agent and return to a previous breakpoint to revert the changes.

I eventually fixed my connection string and tried to connect to the client, but got an error in the browser. I took a screengrab of the error, pasted it into the agent, and asked it to help me fix it. The agent then began to search through all relevant directories to track down the source of the error.

The agent found that there was an issue with the ports that the client and API were using.

Once this issue was resolved the API and client were successfully communicating with each other and the app was fully functional. Below are the final product screens.

Test Summary: Cursor AI Agent for CRUD Application Development

My experience using Cursor AI’s agent mode to build a full-stack CRUD application with Angular and .NET 8 yielded promising results. Here’s a summary of my test:

Time Investment and Results

The entire project took approximately 1 hour to complete from initial prompt to working application. The end result was a functional system with:

  • A working Angular client application with multiple components
  • Complete CRUD functionality (Create, Read, Update, Delete)
  • Successful API communication between the frontend and SQL Server database
  • Product management functionality including listing, creation, editing, and deletion

Error Handling Experience

While the process wasn’t completely error-free, the issues encountered were relatively minor and straightforward to resolve. This suggests that Cursor AI’s agent mode provides code that’s largely production-ready, though still requiring some developer oversight.

Key Strengths Observed

  • Generated a complete, working solution across multiple technologies
  • Correctly implemented the service pattern as requested
  • Successfully connected all layers of the application (UI, API, database)
  • Produced a functional application within a reasonable timeframe

This test demonstrates that Cursor AI’s agent mode can significantly accelerate the development of standard web applications by handling boilerplate code generation and architectural implementation, allowing developers to focus on business logic and specific requirements.

Best Features and Limitations of Cursor AI’s Agent Mode

Standout Features

Error Diagnosis via Screenshots

One of the most practical features was the ability to screenshot error messages and paste them directly into the editor. This streamlined debugging by allowing the AI to visually analyze error messages and provide targeted solutions without requiring manual transcription or explanation of complex errors.

Navigation Through Development History

The ability to return to previous breakpoints in the development process proved invaluable. This feature allowed for easy reference to earlier implementation decisions and helped maintain continuity when working through multi-step development processes, essentially creating development checkpoints.

Limitations

LLM Availability Constraints

A significant limitation was the inconsistent availability of the Claude 3.7 LLM due to high traffic. This forced fallbacks to alternative models during peak usage times, potentially affecting the quality and consistency of the assistance provided during the development process.

Terminal Command Management

The experience of stopping or editing terminal commands lacked polish. A more intuitive interface for modifying in-progress commands or gracefully canceling operations would significantly improve the workflow, especially when working with complex build processes or when needing to adjust commands after initial generation.

These observations highlight how Cursor AI’s agent capabilities can significantly accelerate development while also pointing to areas where improvements would enhance the overall developer experience, particularly around resource availability and terminal interaction.

 

Copilot AI Test Results

VS Code Copilot Test Results: A Contrasting Experience

The testing experience with VS Code Copilot’s agent/chat mode revealed significant limitations compared to Cursor AI when attempting to create the same CRUD application using an identical workflow.

Key Disappointments

Command Execution Limitations

Unlike Cursor AI, Copilot’s agent could not directly execute terminal commands, which proved to be a major workflow impediment. This forced a more manual approach to project setup and dependency installation, requiring developer intervention for tasks that Cursor AI handled automatically.

Incomplete Project Generation

Rather than generating a complete project structure through standard CLI tools, Copilot attempted to manually create individual files. This approach resulted in:

  • Multiple missing files from the expected project structure
  • Incomplete implementation of necessary components
  • Lack of proper project initialization

Workflow Incompatibility

Following the same development workflow that succeeded with Cursor AI proved impossible with Copilot. The lack of integrated terminal command execution created constant interruptions in the development process, requiring manual intervention for tasks that should have been automated.

Failed Application Delivery

Despite using identical prompts and following similar guidance, I was unable to achieve a working client application using VS Code Copilot’s agent. The disconnected nature of the assistance provided fragmented the development process and prevented successful completion.

This stark contrast highlights a fundamental difference in how these AI assistants approach complex project creation tasks. While Cursor AI functions more as an integrated development partner with system-level capabilities, Copilot’s current implementation appears more focused on code suggestion than comprehensive project orchestration.

Note – Copilot’s agent solution is currently still in beta.

Conclusion: Navigating the Rapidly Evolving AI Coding Assistant Landscape

This comparison between Cursor AI and VS Code Copilot reveals a significant gap in capabilities when it comes to comprehensive project generation and workflow integration. Cursor AI’s agent mode demonstrated impressive abilities to execute terminal commands, maintain project context, and deliver a working full-stack application within a reasonable timeframe.

However, it’s crucial to recognize that we’re evaluating these tools during a period of unprecedented innovation in AI development assistants. The landscape of AI coding tools is evolving at a staggering rate, with new capabilities, models, and approaches emerging weekly.

The winning strategy isn’t necessarily committing exclusively to today’s best option but maintaining flexibility to incorporate the continuous improvements that will inevitably reshape how we build software.
