Welcome back to our AI article series!
This article is part of a comprehensive series on AI, which has already compared practical tools for research, text generation, image generation, video generation, and language processing. There is also an article on terms and basics related to AI. Another focus is on AI tools for software development, which is divided into four parts due to the large number of offerings:
- Auto-complete & IDE integration
- Web- and cloud-based coding environments
- Dialog-based wizards for programming
- Pair programming & explanatory AI (which we will discuss in this article)
This final section deals with AI-supported partners in the development process that go beyond mere code completion. They can explain lines of code, justify logical steps, automatically generate tests, or act as interactive partners during debugging. Some even proactively take on tasks, suggest improvements, or help with documentation – similar to a virtual pair programming partner.
The selection in this article includes specialized tools such as Junie and Cody (Sourcegraph), as well as universal solutions with an extended coding focus such as Tabnine Chat and Windsurf (formerly Codeium). The aim is to highlight the functions, strengths, and possible limitations of these assistants and to provide suitable recommendations for different use cases.
| Tool | Best suited for | Advantage | Disadvantage |
| --- | --- | --- | --- |
| Junie | Contextualized interactive pair programming & automated suggestions | Very deep contextual reference for complex projects | Still limited model variety |
| Cody (Sourcegraph) | Large codebase teams needing fast search, explanation & refactoring | Seamless integration with Sourcegraph Code Search | Dependency on Sourcegraph infrastructure |
| Tabnine Chat | Developers who prefer fast AI dialogue in the IDE | Quick, context-based answers directly in the editor | Little in-depth explanation of complex algorithms |
| Windsurf | Users looking for free, feature-rich chat support in IDEs | Free to use with a wide range of features | Cloud processing can raise data protection concerns |
Junie
- Provider (year of release): JetBrains (early access since early 2025)
- Free to use: Basic use with code completion and local models possible
- Account required: Yes – JetBrains account and appropriate IDE license (IntelliJ IDEA Ultimate, PyCharm Professional; WebStorm & GoLand are supported)
- Premium access: AI Pro for individuals (€10/month or €100/year), AI Ultimate for intensive daily use by individuals (€20/month or €200/year); different prices apply for businesses.
- Models & features used: Uses various LLMs such as Anthropic Claude Sonnet (3.7/4.0) and OpenAI GPT-5 (enabled by default since plugin update), selectable in settings
- Special functions: Junie can independently plan and execute entire workflows, including code changes, test runs, execution of terminal commands, and progress reports in the IDE context.
Who is Junie suitable for?
Junie is aimed at developers who work in the JetBrains ecosystem and want to use AI as a partner for complex, multi-step tasks such as coding, testing, or refactoring.
Terms of use & tips
Junie is installed via the plugin directory of JetBrains IDEs. Once activated, the tool can be used via the sidebar or menu – tasks such as code generation, test execution, or debugging can be delegated step by step. It works similarly to an autonomous agent: Junie suggests a plan, works through it, and reports interim results. The model can be changed in the settings (e.g., Claude or GPT-5), with GPT-5 currently recommended as the default.
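What a delegated task can look like in practice: below is a minimal sketch of a small function and the style of pytest tests an agentic assistant like Junie could generate as part of its plan. The module, function names, and test cases are hypothetical examples for illustration, not output actually produced by Junie.

```python
# price_utils.py - a hypothetical function a developer might hand to the assistant
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# test_price_utils.py - the kind of tests an agentic assistant could generate and run
import pytest

def test_apply_discount_reduces_price():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_zero_percent_returns_original():
    assert apply_discount(99.99, 0) == 99.99

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

In an agentic run, the assistant would then execute such tests in the integrated terminal and report the results back in the IDE, as described above.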
Legal aspects & data protection
Junie works in the cloud: code is sent to external LLM services. JetBrains promises transparency and control, but developers working on sensitive or confidential projects in particular should review their internal guidelines.
Advantages and disadvantages of Junie summarized
| Advantages | Disadvantages |
| --- | --- |
| Very deep contextual reference for complex projects | Still limited model variety |
| Plans and executes entire workflows autonomously within the IDE | Code is sent to external LLM services for processing |
Cody
- Provider (year of release): Sourcegraph (founded in 2013); Cody available as an AI assistant since 2023, with the Free and Pro plans discontinued in mid-2025
- Free to use: Only until July 2025 – Cody Free and Pro have since been discontinued (see note below)
- Account required: Yes – Sign in with Sourcegraph account or GitHub/SSO
- Premium access: Cody Enterprise remains available; the Free and Pro plans have been discontinued
- Models used: Uses modern large language models (LLMs) combined with a context-rich knowledge graph of its own code base. Supports models such as GPT-4 Turbo, Claude 2, Mixtral, and StarCoder.
- Editing features: chat interface, code explanation, refactoring, testing, documentation, context across repositories
Who is Cody suitable for?
Cody was particularly suited to development teams that work directly in Sourcegraph and need comprehensive code context search as well as AI-powered explanations, refactoring, and test generation. Cody Enterprise continues to address this target group.
Terms of use & tips
Cody integrates with Sourcegraph, using repository information to provide highly accurate answers. The Enterprise plan adds governance features, SSO, and more granular rights management.
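As a rough illustration of the kind of repository-aware refactoring such an assistant proposes, the sketch below shows duplicated validation logic being extracted into a shared helper. The code and names are invented for this example and are not output from Cody itself.

```python
# Before: the same validation logic duplicated in two request handlers
def create_user(payload: dict) -> dict:
    if "email" not in payload or "@" not in payload["email"]:
        raise ValueError("invalid email")
    return {"action": "create", "email": payload["email"]}

def update_user(payload: dict) -> dict:
    if "email" not in payload or "@" not in payload["email"]:
        raise ValueError("invalid email")
    return {"action": "update", "email": payload["email"]}


# After: the kind of refactoring a context-aware assistant could propose,
# extracting the shared check into one helper used by both handlers
def _validated_email(payload: dict) -> str:
    email = payload.get("email", "")
    if "@" not in email:
        raise ValueError("invalid email")
    return email

def create_user_refactored(payload: dict) -> dict:
    return {"action": "create", "email": _validated_email(payload)}

def update_user_refactored(payload: dict) -> dict:
    return {"action": "update", "email": _validated_email(payload)}
```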
Legal aspects & data protection
Data processing is carried out via Sourcegraph in combination with model APIs (e.g., OpenAI). Enterprise instances can be operated locally or in a private cloud, ensuring that sensitive code repositories remain protected.
Important note (as of August 2025)
Sourcegraph discontinued Cody Free and Pro completely on July 23, 2025; new registrations have not been possible since June 25, 2025. The new Amp tool is recommended as a successor, and bonus credits are available for those switching over. Cody Enterprise remains available and will be supported in the long term.
Advantages and disadvantages of Cody summarized
| Advantages | Disadvantages |
| --- | --- |
| Seamless integration with Sourcegraph Code Search | Dependency on Sourcegraph infrastructure |
| Context across entire repositories for explanation, refactoring, and test generation | Free and Pro plans discontinued; only Cody Enterprise remains |
Tabnine Chat
- Provider (year of release): Tabnine (since 2013 as Codota, chat function available from mid-2023)
- Free to use: Yes – free basic version (Dev Preview) available.
- Account required: Yes – Tabnine account required; supports SSO and integration with IDEs
- Premium access: Dev ($9/month), Enterprise ($39/month)
- Models used: Tabnine's own proprietary AI models as well as optional third-party models such as Claude 3.7/4, GPT-5, Gemini 2.x, Gemma 3, Qwen 2.5, Mistral, and local models for enterprise use.
- Editing features: chat and inline actions (plan, create, test, document, review, explain, maintain); context-based code suggestions in the IDE, explanations, test and documentation generation, bug-fix help, code review agents, refactoring & Jira integration
Who is Tabnine Chat suitable for?
Tabnine Chat is suitable for individual developers and teams who want to use secure, context-aware AI assistance directly in the IDE – with features ranging from code generation and component documentation to governance and team integration.
Terms of use & tips
Tabnine Chat is integrated as a plugin into popular IDEs such as VS Code, JetBrains, Eclipse, and Visual Studio – the chat is designed to work deeply within the coding workflow. Models can be switched with a single click – for example, to improve performance or meet higher data protection requirements. Features such as preset quick actions (e.g., explain, test) and chat response lengths (concise vs. comprehensive) enable flexible control.
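To make quick actions such as "document" and "explain" more concrete, here is a small sketch of an undocumented function and the kind of docstring a chat assistant could add on request; the function and the wording of the docstring are illustrative assumptions, not actual Tabnine output.

```python
# Before: a terse function a developer might select and ask the chat to document
def chunk(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]


# After: the same function with the kind of docstring an assistant could generate
def chunk_documented(items: list, size: int) -> list:
    """Split a list into consecutive sublists of at most `size` elements.

    The final chunk may be shorter if the list length is not a multiple
    of `size`. Example: chunk_documented([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]].
    """
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]
```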
Legal aspects & data protection
Tabnine emphasizes data protection: Standard models (including proprietary AI) do not store user data, and source code is not used for training. Enterprise plans offer highly secure options such as on-premises or VPC, admin controls, SSO, and license compliance mechanisms.
Advantages and disadvantages of Tabnine summarized
| Advantages | Disadvantages |
| --- | --- |
| Quick, context-based answers directly in the editor | Little in-depth explanation of complex algorithms |
| Flexible model selection and strong data protection options (on-premises, VPC) | Account required even for the free version |
Windsurf
- Provider (year of release): Windsurf (first released as Codeium in 2022; rebranded as Windsurf in April 2025)
- Free to use: Yes – basic functions available via Windsurf plugins (formerly Codeium extensions)
- Account required: Yes – supports SSO logins; plugin integration available for many IDEs such as VS Code, JetBrains, and Neovim
- Premium access: Pro ($15/month), Teams ($30/month), Enterprise (from $60/month)
- Models used: hybrid model strategy – in-house models: Llama 3.1-70B (base model), Llama 3.1-405B (premium module for paying users) & external models: GPT-5 (various reasoning levels) and Claude 3.5 Sonnet via API
- Editing features: Windsurf Editor with Cascade Agent enables AI-driven coding, debugging, and autonomous task execution (flow state); Windsurf plugins provide autocomplete, chat, refactoring, and debugging help in supported IDEs
Who is Windsurf suitable for?
Ideal for developers who want to use an AI-native development environment – for example, in enterprise contexts with high data protection requirements and a desire for performance and flexibility. Also for users who need access to state-of-the-art models such as GPT-5, but want to use their own models at the same time.
Terms of use & tips
Plugins remain an option for existing IDE workflows – particularly useful for those switching over. Windsurf Editor, with powerful features such as Cascade Flow and Turbo Mode, is ideal for complete agent-based workflows and also offers tab jump, drag-and-drop, autonomous terminal execution, and more.
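As a sketch of what such an agent-driven fix could look like (the code, the bug, and the tests are hypothetical, not captured Cascade output): the agent runs the test suite in the terminal, notices a failing edge case, and applies a patch along these lines before re-running the tests.

```python
import pytest

# Hypothetical bug an agent might uncover by running the test suite itself:
# an empty list crashes the average calculation with a ZeroDivisionError.
def average(values: list[float]) -> float:
    return sum(values) / len(values)

# The kind of patch an agentic flow could apply before re-running the tests
def average_fixed(values: list[float]) -> float:
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

# Regression tests the agent could add alongside the fix
def test_average_fixed_rejects_empty_list():
    with pytest.raises(ValueError):
        average_fixed([])

def test_average_fixed_handles_normal_input():
    assert average_fixed([2.0, 4.0]) == 3.0
```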
Legal aspects & data protection
Windsurf had announced that it would move away from third-party models and instead rely on its own in-house AI – due to compliance requirements such as FedRAMP High for enterprise customers. However, reconnecting to third-party models such as GPT-5 remains part of the strategy to offer developers the highest performance.
Advantages and disadvantages of Windsurf summarized
| Advantages | Disadvantages |
| --- | --- |
| Free to use with a wide range of features | Cloud processing can raise data protection concerns |
| AI-native editor with agentic workflows (Cascade) and a hybrid model strategy | Premium models are reserved for paying users |
Conclusion
The tools compared address pair programming and explanatory work in different ways. Junie acts as an agentic partner directly in JetBrains IDEs: tasks are planned, executed, and transparently logged – great for complex, multi-step workflows (generation, testing, debugging). Cody is particularly powerful in combination with Sourcegraph when a large repository context is required (search, explanation, refactoring); however, Free/Pro were discontinued in July 2025, leaving Cody Enterprise as the option for companies with governance requirements. Tabnine Chat brings fast, context-aware dialogue to popular IDEs and can be operated with different models depending on data protection and performance requirements – suitable for teams that want practical chat support, testing, and documentation from a single source. Windsurf (formerly Codeium) continues as an AI-native IDE: agentic flows (Cascade), an editor-first approach, and a hybrid model strategy (own models plus connection to e.g., GPT-5) make it attractive if the entire development flow is to be consistently AI-supported.
Another example of an alternative is AskTheCode, a GPT optimized for programming support that provides targeted assistance with debugging, code explanation, and refactoring. These specialized GPTs run on the same model basis as ChatGPT, but are equipped with predefined roles, prompts, and workflows to solve specific tasks faster and in a more structured way. This allows even highly focused development workflows to be implemented directly in ChatGPT – without having to install an additional tool.
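To illustrate the kind of debugging help such a GPT can provide, the sketch below shows a classic Python pitfall (a mutable default argument) and the corrected version an assistant would typically explain and suggest; the snippet is a generic example, not a transcript from AskTheCode.

```python
# Buggy: the default list is created once and shared across all calls,
# so items from earlier calls leak into later ones.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

# Fixed: the kind of correction a debugging assistant would explain and propose,
# using None as a sentinel and creating a fresh list per call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

# Demonstration of the difference
print(add_tag("a"), add_tag("b"))              # ['a', 'b'] ['a', 'b'] - shared state
print(add_tag_fixed("a"), add_tag_fixed("b"))  # ['a'] ['b'] - independent lists
```

The accompanying explanation would typically point out that default argument values in Python are evaluated only once, at function definition time, which is why the shared list persists between calls.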
For JetBrains teams that need autonomous assistance, Junie offers the greatest leverage. Where Sourcegraph is already established and enterprise governance is important, Cody Enterprise remains the obvious choice. For broad IDE coverage, flexible model selection, and fast chat support, Tabnine Chat is recommended. Those who want to not only integrate AI but make it the core of their development environment are best served by Windsurf. For targeted workflows without additional installation, specialized GPTs such as AskTheCode offer an interesting, fast solution.
The decision ultimately depends on whether selective assistance is sufficient, whether an existing ecosystem (JetBrains, Sourcegraph) is used, or whether you want to transform the entire coding workflow using AI.