v1.1.124

Support for GPT-4.1 & GPT-4.1-mini

We have upgraded our model lineup to OpenAI’s newly released GPT-4.1 and GPT-4.1-mini, replacing the previous GPT-4o and GPT-4o-mini across the entire application.

What this means for you:

  • Blazing-fast responses – average latency is reduced by ~30%.
  • Smarter code generation – the new models produce cleaner, more reliable code snippets and SQL queries.
  • Improved context handling – better at following long or complex instructions without losing track.

If you were previously using GPT-4o or GPT-4o-mini, no action is required—your existing settings now automatically point to GPT-4.1 equivalents.
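If you call these models programmatically outside the application, the only change is the model identifier. The snippet below is a minimal sketch assuming the official OpenAI Python SDK (openai >= 1.0) and OpenAI's published model names; the application performs this remapping for you internally, so it is shown purely for illustration.

```python
# Minimal sketch (assumes the official OpenAI Python SDK, openai >= 1.0).
# The application already remaps legacy names; this only illustrates the change.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Old -> new model identifiers.
MODEL_UPGRADES = {
    "gpt-4o": "gpt-4.1",
    "gpt-4o-mini": "gpt-4.1-mini",
}

def chat(prompt: str, model: str = "gpt-4o") -> str:
    # Transparently upgrade a legacy model name before making the call.
    model = MODEL_UPGRADES.get(model, model)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(chat("Write a SQL query that lists the ten most recent orders."))
```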

Unlimited DeepSeek V3 for Professional Plan

We’ve also added first-class support for DeepSeek V3, a flagship open-source model. All users on the Professional plan can now invoke DeepSeek V3 without any quota limits.

Why you’ll love DeepSeek V3:

  • Competitive performance on reasoning and coding tasks
  • Open-source transparency—inspect the weights and behavior
  • No usage caps for Professional subscribers

To start using DeepSeek V3, simply select it from the model dropdown in the chat or code interpreter panel.
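For reference, DeepSeek V3 can also be called directly through DeepSeek's OpenAI-compatible API. The sketch below assumes DeepSeek's public endpoint and its `deepseek-chat` model identifier (which serves V3); inside the application, picking the model from the dropdown is all that's required.

```python
# Minimal sketch: calling DeepSeek V3 outside the application, assuming
# DeepSeek's OpenAI-compatible endpoint and the openai Python SDK.
# Inside the app, selecting "DeepSeek V3" from the dropdown is sufficient.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder; supply your own key
)

response = client.chat.completions.create(
    model="deepseek-chat",  # DeepSeek V3
    messages=[{"role": "user", "content": "Explain a window function in SQL."}],
)
print(response.choices[0].message.content)
```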