---
title: PromptAR Backend API
emoji: 🎨
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
app_port: 7860
---

PromptAR Backend API

FastAPI backend for generating 3D models from text prompts using AI, optimized for AR applications.

Features

  • 🎨 Text-to-3D Generation: Generate 3D models from text prompts using TRELLIS and Shap-E
  • 🚀 Two Generation Modes:
    • Advanced Mode (TRELLIS): High-quality textured models
    • Basic Mode (Shap-E): Fast generation with basic geometry
  • 📦 GLB Format: Direct export to GLB format optimized for AR
  • 🔧 AR-Optimized: Automatic brightness normalization for better AR visibility
  • 📊 Request Logging: Built-in database for tracking API requests
  • 🌐 CORS Enabled: Ready for cross-origin requests from mobile and web apps

API Endpoints

🏠 Root Endpoints

  • GET / - API information and status
  • GET /health - Health check endpoint

🎨 Model Generation

  • POST /api/models/generate - Generate a 3D model from text

    {
      "prompt": "wooden chair",
      "mode": "advanced"
    }
    
  • GET /api/models/download/{model_id} - Download a generated model
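A minimal Python client for the two endpoints above might look like this. It uses only the standard library to stay dependency-free; BASE_URL, generate_model, and download_url are illustrative names, not part of the backend:

```python
import json
import urllib.request

BASE_URL = "https://your-space-url.hf.space"  # replace with your Space URL


def generate_model(prompt: str, mode: str = "advanced") -> dict:
    """POST a prompt to the generate endpoint and return the parsed JSON response."""
    body = json.dumps({"prompt": prompt, "mode": mode}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/models/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Advanced mode can take ~30 seconds, so allow a generous timeout.
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())


def download_url(response: dict) -> str:
    """Build the absolute download URL from the relative path in the response."""
    return BASE_URL + response["download_url"]
```

A typical flow is `resp = generate_model("wooden chair")` followed by fetching `download_url(resp)` to retrieve the GLB file.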

Usage

API Documentation

Once deployed, visit:

  • Interactive docs: https://your-space-url.hf.space/docs
  • Alternative docs: https://your-space-url.hf.space/redoc

Example Request

curl -X POST "https://your-space-url.hf.space/api/models/generate" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a red sports car", "mode": "advanced"}'

Response

{
  "status": "success",
  "message": "3D model generated successfully using advanced mode",
  "model_id": "abc123-456def-789ghi",
  "download_url": "/api/models/download/abc123-456def-789ghi"
}

Configuration

The backend uses environment variables for configuration. In Hugging Face Spaces, set these in the Settings > Repository Secrets:

Required

  • HF_TOKEN - Your Hugging Face API token (create one under Settings > Access Tokens on huggingface.co)

Optional

  • ALLOWED_ORIGINS - CORS allowed origins (default: "*")
  • MODEL_STORAGE_PATH - Path for storing models (default: "./models")
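As a sketch, reading these variables could look like the following. Only the variable names and defaults come from this README; load_settings is a hypothetical helper, and the actual config.py may be structured differently:

```python
import os


def load_settings(env: dict) -> dict:
    """Read backend settings from an environment mapping, applying defaults."""
    return {
        "hf_token": env.get("HF_TOKEN"),  # required; no default
        # Comma-separated list of allowed origins; "*" permits any origin.
        "allowed_origins": env.get("ALLOWED_ORIGINS", "*").split(","),
        "model_storage_path": env.get("MODEL_STORAGE_PATH", "./models"),
    }


settings = load_settings(os.environ)
```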

Architecture

The backend is built with:

  • FastAPI: Modern Python web framework
  • Gradio Client: Integration with HF Spaces (TRELLIS, Shap-E)
  • Pydantic: Data validation
  • SQLite: Request logging database
  • pygltflib: 3D model processing
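The SQLite request-logging component could be sketched as follows. Table and column names here are illustrative, not taken from the actual database_service.py:

```python
import sqlite3


def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Open the logging database and create the requests table if needed."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS requests (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               prompt TEXT NOT NULL,
               mode TEXT NOT NULL,
               status TEXT NOT NULL,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn


def log_request(conn: sqlite3.Connection, prompt: str, mode: str, status: str) -> None:
    """Record one API request for later inspection."""
    conn.execute(
        "INSERT INTO requests (prompt, mode, status) VALUES (?, ?, ?)",
        (prompt, mode, status),
    )
    conn.commit()
```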

Project Structure

backend/
├── app/                    # Application factory
│   └── app.py              # FastAPI app creation
├── routers/                # API route handlers
│   ├── root.py             # Root endpoints
│   └── models.py           # Model generation endpoints
├── services/               # Business logic
│   ├── huggingface_service.py   # AI model integration
│   ├── storage_service.py       # Model storage
│   ├── ar_material_service.py   # AR optimization
│   └── database_service.py      # Request logging
├── schemas/                # Request/response models
├── middleware/             # Custom middleware
├── utils/                  # Utilities
├── config.py               # Configuration
└── main.py                 # Entry point

3D Model Generation

Advanced Mode (TRELLIS)

  • High-quality textured 3D models
  • ~10-30 seconds generation time
  • Uses Microsoft's TRELLIS model
  • Optimized for AR applications

Basic Mode (Shap-E)

  • Fast generation
  • ~5-10 seconds generation time
  • Uses OpenAI's Shap-E model
  • Basic geometry without textures
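The two modes above can be summarized in a small dispatch table. This is a sketch; the timing figures come from this README, and the actual service code may structure the choice differently:

```python
# Properties of each generation mode (illustrative; see README for details).
GENERATION_MODES = {
    "advanced": {"backend": "TRELLIS", "textured": True, "typical_seconds": (10, 30)},
    "basic": {"backend": "Shap-E", "textured": False, "typical_seconds": (5, 10)},
}


def select_mode(mode: str) -> dict:
    """Look up a generation mode, rejecting unknown names early."""
    try:
        return GENERATION_MODES[mode]
    except KeyError:
        raise ValueError(
            f"mode must be one of {sorted(GENERATION_MODES)}, got {mode!r}"
        ) from None
```

Validating the mode before dispatching to a remote Space gives the caller a fast, clear error instead of a failed generation request.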

Development

To run locally:

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export HF_TOKEN=your_token_here

# Run the server
python main.py

Visit http://localhost:8000/docs for API documentation.

License

MIT License. See the LICENSE file for details.

Links