AIRGAP Studio

System Requirements

System requirements for installing AIRGAP Studio

Hardware Requirements

| Component  | Minimum              | Recommended                            |
|------------|----------------------|----------------------------------------|
| CPU        | 64-bit x86 processor | Intel Core i7 / AMD Ryzen 7 or higher  |
| RAM        | 16GB                 | 32GB                                   |
| Disk Space | 10GB                 | 20GB SSD                               |
| Display    | 1280x720             | 1920x1080 or higher                    |

GPU (Optional)

A GPU significantly improves AI model inference speed.

| GPU Type             | VRAM | Performance Level                 |
|----------------------|------|-----------------------------------|
| NVIDIA (Vulkan)      | 8GB+ | Optimal (30-45 tokens/s)          |
| AMD Radeon (Vulkan)  | 8GB+ | Good (25-40 tokens/s)             |
| NVIDIA/AMD (Vulkan)  | 6GB  | Functional (with some limitations) |
| Intel iGPU (Vulkan)  | -    | Basic (15-25 tokens/s)            |
| CPU only             | -    | Basic (8-15 tokens/s)             |

AIRGAP Studio works fine without a GPU. In CPU mode, AI response generation will take longer.
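To make the rates in the table above concrete, here is a rough estimate of how long a single response takes at those speeds. The 500-token response length is an assumption for illustration, and the rates used are the midpoints of the table's ranges:

```python
def response_time_seconds(tokens: int, tokens_per_second: float) -> float:
    """Estimate how long generating `tokens` takes at a given rate."""
    return tokens / tokens_per_second

# Hypothetical 500-token response, using midpoints of the ranges above
# (30-45 tokens/s for an 8GB+ GPU, 8-15 tokens/s for CPU only):
for label, rate in [("NVIDIA 8GB+ (optimal)", 37.5),
                    ("CPU only (basic)", 11.5)]:
    print(f"{label}: ~{response_time_seconds(500, rate):.0f}s")
```

In other words, the same response that takes around 13 seconds on a well-equipped GPU takes roughly 40+ seconds in CPU mode.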

GPU Compatibility

AIRGAP Studio's AI engine (llama-server) uses the Vulkan backend. GPU acceleration is available for any NVIDIA, AMD, or Intel GPU that supports Vulkan.

```shell
# Check NVIDIA GPU information
nvidia-smi
```

Vulkan support ships with your GPU driver, so make sure you have the latest graphics drivers installed.

Operating System

| OS                         | Support       | Notes        |
|----------------------------|---------------|--------------|
| Windows 11 64-bit          | Supported     | Recommended  |
| Windows 10 64-bit (1903+)  | Supported     |              |
| Windows 10 32-bit          | Not supported |              |
| Windows 8.1 or earlier     | Not supported |              |
| macOS / Linux              | Not supported | Windows only |

Software Requirements

Required

| Software       | Purpose          | Notes                              |
|----------------|------------------|------------------------------------|
| Ollama         | Local LLM server | Included in installer              |
| Qwen3:8b model | AI inference     | Installed separately via model pack |

Optional

| Software      | Purpose                    | Notes                   |
|---------------|----------------------------|-------------------------|
| Git           | Checkpoint feature         | Portable Git supported  |
| Python 3.8+   | Jupyter notebooks, scripts | For data analysis tasks |
| Node.js 18+   | MCP server execution       | When using MCP servers  |
| Figma Desktop | Bridge prototyping feature | Requires MCP plugin     |

Disk Space Breakdown

| Component                        | Size    |
|----------------------------------|---------|
| AIRGAP Studio (IDE + extensions) | ~1GB    |
| Ollama runtime                   | ~200MB  |
| Qwen3:8b model                   | ~5GB    |
| Workspace (cache, logs, etc.)    | ~2-5GB  |
| Total                            | ~8-12GB |

Reserve additional disk space for project files and extra model installations.
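A quick way to check whether the current drive has room for a full install. The 12GB threshold is the upper end of the breakdown above; `shutil.disk_usage` is part of the Python standard library:

```python
import shutil

REQUIRED_GB = 12  # upper end of the disk space breakdown above

def has_enough_space(path: str = ".", required_gb: float = REQUIRED_GB) -> bool:
    """Return True if the drive containing `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gb >= required_gb

if not has_enough_space():
    print(f"Warning: less than {REQUIRED_GB}GB free - free up space before installing.")
```

Run it from the drive you plan to install to; on Windows you can pass a drive root such as `"C:\\"` as `path`.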

Network (Optional)

AIRGAP Studio is designed for offline environments, so a network connection is not required.

| Feature                                      | Network Required |
|----------------------------------------------|------------------|
| Core IDE features                            | No               |
| Local AI (Ollama)                            | No               |
| External providers (Anthropic, OpenAI, etc.) | Yes              |
| Remote MCP servers                           | Yes              |
| Extension installation (marketplace)         | Yes              |

Performance Optimization Tips

- RAM: With 32GB or more, the IDE and Ollama run smoothly at the same time.
- SSD: Model loading speeds are significantly faster compared to HDD.
- GPU VRAM: With a Vulkan-capable GPU (NVIDIA/AMD) with 8GB+ VRAM, the Qwen3:8b model loads entirely into GPU memory.
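A back-of-the-envelope check of whether a model fits entirely in VRAM. The ~5GB model size comes from the disk space breakdown above; the 1.5GB allowance for KV cache and runtime overhead is an assumption for illustration, and real usage varies with context length:

```python
def fits_in_vram(model_gb: float, vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """Rough check: model weights plus KV-cache/runtime overhead must fit in VRAM.

    overhead_gb is an illustrative assumption, not a measured figure.
    """
    return model_gb + overhead_gb <= vram_gb

print(fits_in_vram(5.0, 8.0))  # Qwen3:8b on an 8GB card: fits fully
print(fits_in_vram(5.0, 6.0))  # 6GB card: partial offload, "functional with limitations"
```

This matches the GPU table above: 8GB+ cards hold the whole model in GPU memory, while 6GB cards fall back to splitting layers between GPU and system RAM.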