# Tips & Tricks

AIRGAP Studio usage tips
## Using the AI Assistant

### Give Clear Instructions
Being specific with your requests to the AI assistant yields more accurate results.
```text
# Inefficient request
"Fix this code"

# Effective request
"Add a null check to the handleSubmit function and show a toast notification to the user when an error occurs"
```
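As a rough illustration, a change matching the specific request above might look like the following sketch. The `handleSubmit` and toast names come from the example request itself; the form data shape and the `showToast` helper are hypothetical stand-ins, not part of AIRGAP Studio:

```typescript
// Hypothetical form data type and toast helper, for illustration only.
type FormData = { email: string } | null;

const notifications: string[] = [];
function showToast(message: string): void {
  // Stand-in for a real toast UI component.
  notifications.push(message);
}

function handleSubmit(data: FormData): boolean {
  // The null check requested in the prompt: fail gracefully with a toast
  // instead of throwing when no form data is provided.
  if (data === null) {
    showToast("Submission failed: no form data provided.");
    return false;
  }
  // ...proceed with the actual submission here.
  return true;
}
```

With this change, `handleSubmit(null)` surfaces a user-visible toast rather than crashing, which is exactly the behavior the specific prompt asked for.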
### Provide Context
The AI can better understand context when you have files open or code selected. If multiple files are relevant, reference them with @filename.
### Use Auto Compact
Long conversations can exhaust the context window. AIRGAP Assistant has Auto Compact enabled by default, which automatically summarizes the conversation.
## Design Workflow

### Bridge to Designer Integration
- Generate design tokens in AIRGAP Bridge.
- The tokens are automatically injected into AIRGAP Designer's system prompt.
- When generating UI in Designer, you get consistent output with your design tokens applied.
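As a sketch of the idea, design tokens injected into a system prompt might look something like the object below, and generated UI code would then reference those values instead of hard-coded ones. The token names and values here are hypothetical, not AIRGAP Bridge's actual output format:

```typescript
// Hypothetical design-token object; Bridge's real output shape may differ.
const designTokens = {
  color: {
    primary: "#2563eb",
    surface: "#ffffff",
    danger: "#dc2626",
  },
  spacing: { sm: "4px", md: "8px", lg: "16px" },
  radius: { card: "8px", button: "6px" },
} as const;

// A generator that honors the tokens emits styles referencing them,
// so every generated button uses the same brand color and radius.
function buttonStyle(): string {
  return `background:${designTokens.color.primary};` +
    `border-radius:${designTokens.radius.button};` +
    `padding:${designTokens.spacing.md}`;
}
```

Because every component is generated against the same token object, the output stays visually consistent across separate generation runs.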
### Using UI Library Components
Search for components in the UI Library and use the Copy Code or Insert into Editor features.
## Performance Optimization

### GPU Acceleration
If you have an NVIDIA GPU, the AI engine will automatically use it. To verify GPU usage, run the following in the terminal:
```shell
ollama ps
```
If `GPU` appears in the `PROCESSOR` column, the GPU is being used correctly.
### Adjusting Context Size
If AI responses are incomplete for complex tasks, check the context size. The default is 16,384 tokens and can be adjusted in the settings.
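To get a feel for how quickly 16,384 tokens can be exhausted, a common rough heuristic is about four characters of English text per token. This is only an approximation for intuition, not the tokenizer AIRGAP actually uses:

```typescript
// Rough heuristic: ~4 characters of English text per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const contextWindow = 16_384; // the default context size

// A hypothetical 3,000-line source file at ~40 characters per line:
const fileChars = 3000 * 40;
const fileTokens = estimateTokens("x".repeat(fileChars));

// By this estimate the file alone is ~30,000 tokens, nearly double
// the default window, before any conversation history is counted.
const exceedsWindow = fileTokens > contextWindow;
```

By this back-of-the-envelope estimate, a single large file can exceed the default window on its own, which is why complex tasks may need a larger context size in the settings.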
### Disable Unused Extensions

Disabling unused extensions speeds up IDE startup. Select **Extensions: Show Installed Extensions** from the Command Palette (`Ctrl+Shift+P`).
## Layout Setup

### Recommended Dual Monitor Layout
| Monitor | Arrangement |
|---|---|
| Primary monitor | Code editor + terminal |
| Secondary monitor | AIRGAP Assistant (sidebar) + browser preview |
### Recommended Single Monitor Layout
- Place the editor on the left and AIRGAP Assistant in the right sidebar.
- Toggle the file explorer with `Ctrl+B` to free up screen space.
- Show the bottom panel (terminal) only when needed with `Ctrl+J`.
## Changing Models
To use a model other than the default Qwen3:8b, see Provider Configuration. In network-connected environments, external models such as Anthropic Claude and OpenAI GPT are also available.