System Architecture

Understanding the complete Naas platform architecture and how all components work together.

High-Level Architecture

The Naas platform is built as a distributed system with clear separation between user interfaces, platform services, and the AI engine core.

Component Details

User Interfaces

🌐 Workspace (React Web App)

  • Repository: jupyter-naas/workspace
  • Purpose: Main user interface for AI interactions
  • Technology: React, TypeScript, modern web standards
  • Features:
    • Chat interface
    • Search
    • Team collaboration tools
    • File upload and processing
    • Analytics dashboard

🔧 Chrome Extension

  • Repository: jupyter-naas/chrome-extension
  • Purpose: Bring AI assistance to any website
  • Features:
    • Context-aware content analysis
    • Universal AI assistant overlay
    • Seamless authentication
    • Cross-site functionality

💻 ABI CLI (Local Development)

  • Repository: jupyter-naas/abi
  • Purpose: Local development and customization framework
  • Technology: Python, Docker, modern development tools
  • Key Benefits:
    • Complete offline capability
    • Full customization control
    • Open source transparency
    • Local data processing

🐍 Python SDK

  • Repository: jupyter-naas/naas-python
  • Purpose: Programmatic platform access
  • Features:
    • REST API client
    • CLI tools
    • Authentication management
    • Local development utilities

Platform Services

⚡ API Gateway (naas-api-ce)

  • Repository: jupyter-naas/naas-api-ce
  • Technology: FastAPI, PostgreSQL, Redis
  • Architecture: Domain-driven design with microservices
  • Features:
    • JWT-based authentication
    • Rate limiting and quotas
    • Request/response logging
    • API versioning
    • OpenAPI documentation
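The rate limiting and quotas mentioned above are commonly implemented with a token-bucket algorithm; here is a minimal stdlib sketch (the class and parameters are illustrative, not the gateway's actual implementation):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens/sec."""
    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)  # 5-request burst, 1 req/sec sustained
results = [bucket.allow() for _ in range(7)]
print(results)  # the first 5 calls pass; the rest are rejected until tokens refill
```

A real gateway would keep one bucket per API key (e.g. in Redis) and return HTTP 429 when `allow()` is false.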

Core Domains:

  • Auth Domain: User authentication and authorization
  • Space Domain: Computational environment management
  • Registry Domain: Container and module management
  • Analytics Domain: Usage tracking and insights

💳 Credits API (Usage Management)

  • Repository: jupyter-naas/credits-api
  • Purpose: Billing, usage tracking, and resource management
  • Features:
    • Usage-based billing
    • Real-time cost tracking
    • Budget controls and alerts
    • Granular usage analytics
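Usage-based billing with real-time cost tracking and budget controls can be sketched as a simple meter; the metric names and per-unit rates below are made up for illustration, not the Credits API's actual pricing:

```python
from collections import defaultdict

# Illustrative per-unit rates in credits; real rates live in the Credits API
RATES = {"llm_tokens": 0.002, "storage_gb_hours": 0.05, "api_calls": 0.001}

class UsageMeter:
    """Accumulate metered usage per workspace and report cost in real time."""
    def __init__(self, budget: float):
        self.budget = budget
        self.usage = defaultdict(float)

    def record(self, metric: str, quantity: float) -> None:
        self.usage[metric] += quantity

    def cost(self) -> float:
        return sum(RATES[m] * q for m, q in self.usage.items())

    def over_budget(self) -> bool:
        return self.cost() > self.budget

meter = UsageMeter(budget=10.0)
meter.record("llm_tokens", 4000)  # 4000 * 0.002 = 8.0 credits
meter.record("api_calls", 500)    # 500 * 0.001 = 0.5 credits
print(round(meter.cost(), 3))     # 8.5
print(meter.over_budget())        # False
```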

AI Engine (ABI Core)

🤖 Multi-Agent System

Built on the LangGraph framework with stateful, multi-step workflows:

Core Architecture:

  • LangGraph StateGraph: Manages workflow between model and tools
  • Agent Nodes: Process messages using language models
  • Tool Nodes: Execute integration actions when requested
  • Checkpointing: Conversation history persistence via MemorySaver

Agent Categories:

  • Core Agents: ABI, ChatGPT, Claude, Gemini, Grok, Llama, Mistral, DeepSeek
  • Business Functions: Sales, Marketing, Finance, Operations
  • Technical: Development, DevOps, Data Analysis
  • Custom: User-created agents with specific capabilities

Agent Architecture: ABI agents are built on LangGraph with a standardized creation pattern:

# ABI agent implementation following the standard creation pattern
from langchain_openai import ChatOpenAI
from abi.services.agent.Agent import Agent, AgentConfiguration, AgentSharedState, MemorySaver
from src import secret  # ABI's secret manager
from src.core.modules.naas.integrations import NaasIntegration, NaasIntegrationConfiguration

SYSTEM_PROMPT = "You are a business assistant."  # replace with the agent's real prompt

class BusinessAgent(Agent):
    """Thin subclass following ABI's per-agent class convention."""
    pass

def create_agent(agent_shared_state=None, agent_configuration=None) -> Agent:
    # Configure the LLM model
    model = ChatOpenAI(model="gpt-4o", temperature=0, api_key=secret.get('OPENAI_API_KEY'))

    # Initialize tools and sub-agents
    tools = []
    agents = []

    # Add integration tools conditionally, only when the API key is configured
    if secret.get('NAAS_API_KEY'):
        naas_config = NaasIntegrationConfiguration(api_key=secret.get('NAAS_API_KEY'))
        tools += NaasIntegration.as_tools(naas_config)

    # Fall back to default configuration and state
    if agent_configuration is None:
        agent_configuration = AgentConfiguration(system_prompt=SYSTEM_PROMPT)
    if agent_shared_state is None:
        agent_shared_state = AgentSharedState(thread_id=0)

    return BusinessAgent(
        name="Business Agent",
        chat_model=model,
        tools=tools,
        agents=agents,
        state=agent_shared_state,
        configuration=agent_configuration,
        memory=MemorySaver(),
    )

🧠 Ontology Engine

  • Technology: RDF/OWL ontologies with SPARQL queries
  • Purpose: Knowledge representation and reasoning
  • Features:
    • BFO-compliant ontology structure
    • Deterministic SPARQL queries
    • Semantic relationship modeling
    • Event-driven knowledge updates
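To illustrate what a deterministic triple-pattern query over an RDF-style graph looks like, here is a pure-Python sketch; real queries run as SPARQL against the triple store, and the triples below are invented for the example:

```python
# A knowledge graph as (subject, predicate, object) triples
TRIPLES = {
    ("naas:Abi", "rdf:type", "naas:Agent"),
    ("naas:Abi", "naas:usesModel", "naas:gpt-4o"),
    ("naas:SalesAgent", "rdf:type", "naas:Agent"),
    ("naas:SalesAgent", "naas:usesModel", "naas:claude-3-5-sonnet"),
}

def match(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts like a SPARQL variable."""
    return sorted(
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    )

# Equivalent to: SELECT ?s WHERE { ?s rdf:type naas:Agent }
agents = [s for s, _, _ in match(predicate="rdf:type", obj="naas:Agent")]
print(agents)  # ['naas:Abi', 'naas:SalesAgent']
```

The same pattern always returns the same bindings, which is what makes ontology-backed answers deterministic rather than model-dependent.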

🔗 Integration Framework

  • Purpose: Connect to external systems and data sources
  • Supported Integrations:
    • APIs: REST, GraphQL, SOAP
    • Databases: PostgreSQL, MySQL, MongoDB
    • Cloud Services: AWS, GCP, Azure
    • Business Tools: Salesforce, HubSpot, Slack
    • File Systems: Local, S3, Google Drive

💾 Storage Systems

  • Triple Store: Oxigraph for RDF knowledge graphs
  • Vector Store: Semantic search and embeddings
  • Object Storage: Unstructured data management
  • Cache Layer: Redis for performance optimization
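A cache layer like the Redis tier above typically follows the cache-aside pattern: check the cache first, fall back to the backing store on a miss, then populate the cache. A minimal sketch with a dict standing in for Redis:

```python
class CacheAsideStore:
    """Read-through cache: serve hits from cache, populate on misses."""
    def __init__(self, backend: dict):
        self.backend = backend  # stands in for the database/object store
        self.cache = {}         # stands in for Redis
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backend[key]  # expensive lookup on a miss
        self.cache[key] = value    # populate the cache for next time
        return value

store = CacheAsideStore({"doc:1": "architecture overview"})
store.get("doc:1")  # miss: fetched from the backend, then cached
store.get("doc:1")  # hit: served from the cache
print(store.hits, store.misses)  # 1 1
```

In production the cache entries would also carry a TTL so stale data expires.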

External Integrations

🎯 LLM Providers

Multi-model support for flexibility and cost optimization:

  • OpenAI: GPT-4, GPT-3.5-turbo
  • Anthropic: Claude 3.5 Sonnet
  • Google: Gemini Pro
  • Meta: Llama models
  • Mistral: Mistral Large
  • Local: Ollama support for on-premises deployment
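Multi-model routing can be as simple as a dispatch table keyed by model-name prefix; this sketch mirrors the provider list above, though the routing logic itself is illustrative:

```python
# Map each model family to its provider; used to pick a client and API key
PROVIDERS = {
    "gpt": "openai",
    "claude": "anthropic",
    "gemini": "google",
    "llama": "meta",
    "mistral": "mistral",
}

def route(model_name: str) -> str:
    """Resolve a model name to its provider, defaulting to local Ollama."""
    for prefix, provider in PROVIDERS.items():
        if model_name.lower().startswith(prefix):
            return provider
    return "ollama"  # on-premises fallback

print(route("gpt-4o"))             # openai
print(route("claude-3-5-sonnet"))  # anthropic
print(route("custom-finetune"))    # ollama
```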

📊 Data Sources

Connect to virtually any data source:

  • Business Systems: CRM, ERP, HR systems
  • Cloud Storage: S3, Google Drive, Dropbox
  • Databases: SQL and NoSQL databases
  • APIs: Third-party service APIs
  • Files: CSV, JSON, PDF, Office documents

Data Flow Architecture

Request Processing Flow

Requests travel from a user interface (Workspace, Chrome Extension, ABI CLI, or Python SDK) through the API Gateway, which authenticates and routes them to the ABI engine. ABI selects an agent, calls LLM providers and integrations as needed, and returns the formatted response along the same path.

Data Processing Pipeline

  1. Input Processing: Parse and validate user input
  2. Context Enrichment: Query ontology for relevant context
  3. Agent Selection: Route to appropriate specialized agent
  4. Tool Execution: Run integrations and external API calls
  5. AI Generation: Generate response using selected LLM
  6. Output Formatting: Structure response for user interface
  7. Knowledge Update: Store new facts in knowledge graph
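The seven steps above can be sketched as a composed pipeline where each stage is a function that enriches a shared request context; the stage bodies here are stubs standing in for the real implementations:

```python
def input_processing(ctx):
    ctx["query"] = ctx["raw"].strip()  # parse and validate user input
    return ctx

def context_enrichment(ctx):
    ctx["context"] = f"ontology facts about: {ctx['query']}"  # stub SPARQL lookup
    return ctx

def agent_selection(ctx):
    ctx["agent"] = "sales" if "revenue" in ctx["query"] else "abi"
    return ctx

def tool_execution(ctx):
    ctx["tool_results"] = []  # stub: integrations and external APIs run here
    return ctx

def ai_generation(ctx):
    ctx["response"] = f"[{ctx['agent']}] answer to '{ctx['query']}'"  # stub LLM call
    return ctx

def output_formatting(ctx):
    ctx["formatted"] = {"text": ctx["response"]}
    return ctx

def knowledge_update(ctx):
    ctx["stored_facts"] = 1  # stub: write new triples to the knowledge graph
    return ctx

PIPELINE = [input_processing, context_enrichment, agent_selection,
            tool_execution, ai_generation, output_formatting, knowledge_update]

def handle(raw: str) -> dict:
    ctx = {"raw": raw}
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx

result = handle("  what drove revenue last quarter?  ")
print(result["agent"])  # sales
```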

Deployment Architectures

1. Managed Cloud

  • All platform services managed by Naas
  • ABI can run locally for customization
  • Best for: Most users, fastest time-to-value

2. Hybrid Deployment

  • Platform services in cloud
  • ABI and sensitive data on-premises
  • Best for: Regulated industries, data sovereignty

3. Fully Self-Hosted

  • All components on your infrastructure
  • Complete control and customization
  • Best for: Maximum security, custom requirements

4. Development Setup

# Open source ABI development
git clone https://github.com/jupyter-naas/abi.git
cd abi

# Setup environment
cp .env.example .env
# Add your AI model API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)

# Start ABI agent
make chat-abi-agent

Security Architecture

Authentication & Authorization

  • JWT tokens for stateless authentication
  • Role-based access control (RBAC)
  • OAuth integration with major providers
  • API key management for programmatic access

Data Protection

  • Encryption at rest for all stored data
  • TLS encryption for all network communication
  • Private cloud deployment options
  • Data residency controls for compliance

Privacy Controls

  • Local processing options with ABI CLI
  • Data retention policies and controls
  • Audit logging for compliance requirements
  • GDPR compliance features

Scalability & Performance

Horizontal Scaling

  • Microservices architecture enables independent scaling
  • Container orchestration with Kubernetes
  • Load balancing across multiple instances
  • Database sharding for high-volume workloads

Performance Optimization

  • Caching layers at multiple levels
  • Async processing for long-running tasks
  • Connection pooling for database efficiency
  • CDN delivery for static assets

Monitoring & Observability

  • Real-time metrics and alerting
  • Distributed tracing across services
  • Performance profiling and optimization
  • Usage analytics and insights

This architecture provides the foundation for understanding how to customize, integrate, and scale the Naas platform according to your specific needs.

Next Steps

  • Start Customizing: Install ABI CLI for local development
  • Integrate Systems: API Integration for system integration
  • Scale Up: Contact us for enterprise deployment planning