
Intelligence Services (SEO & Media)

This manual details the specialized intelligence engines within PlugZero that handle external data collection, technical site audits, and social sentiment tracking.


🔍 1. SEO Intelligence Service (seo_intelligence)

The SEO service is a multi-layered engine designed to audit technical site health and track market search trends using custom crawlers.

A. Technical Site Audits

  • Core Model: SiteAudit orchestrates a batch of crawls against a target_url.
  • Crawler Logic (crawler.py): Uses asynchronous requests to navigate a site’s link graph up to a max_pages limit.
  • Audit Findings:
    • AuditedPage: Stores performance metrics like load_time, status_code, and asset counts.
    • PageIssue: Categorizes findings into error, warning, or notice.
  • Task Scheduling: Managed via Celery Beat. Supports WEEKLY or MONTHLY recurrence.
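The crawl loop described above can be sketched as a breadth-first traversal capped at max_pages. This is an illustrative outline, not the actual crawler.py implementation: the `fetch_links` coroutine is a stand-in for the real asynchronous HTTP layer, and the real crawler also records load_time and status_code per page.

```python
import asyncio

async def crawl(start_url, fetch_links, max_pages=100):
    """Breadth-first traversal of a site's link graph, capped at max_pages."""
    seen, queue, audited = {start_url}, [start_url], []
    while queue and len(audited) < max_pages:
        url = queue.pop(0)
        links = await fetch_links(url)  # real crawler: async HTTP GET + parse links
        audited.append(url)             # real crawler: store an AuditedPage record here
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return audited
```

Because the `seen` set is checked before enqueueing, each URL is fetched at most once even when pages link to each other.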

B. Competitive Trend Analysis

  • Keyword Tracking: KeywordTarget tracks specific search queries, assigning them an Intent (Informational, Transactional, etc.).
  • Competitive Scoring: CompetitorDomain computes a technical_score (0-100) by comparing your audit metrics against competitors.
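One plausible shape for the technical_score computation is a per-metric comparison averaged and scaled to 0-100. The metric names and the ratio formula below are assumptions for illustration; the actual CompetitorDomain logic may weight metrics differently.

```python
def technical_score(own: dict, competitor: dict) -> float:
    """Hypothetical 0-100 score. For each lower-is-better metric, the share
    competitor / (own + competitor) approaches 1.0 when our site is much
    faster/cleaner and is 0.5 when tied. Components are averaged, then scaled."""
    metrics = ("avg_load_time", "error_count")  # assumed metric names
    parts = []
    for m in metrics:
        ours, theirs = float(own[m]), float(competitor[m])
        total = ours + theirs
        parts.append(0.5 if total == 0 else theirs / total)
    return round(100 * sum(parts) / len(parts), 1)
```

A tied comparison lands at 50.0, which gives the score a natural midpoint to read against.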

📊 2. Media Intelligence Service (media_intelligence)

This service provides real-time “Social Listening” capabilities, tracking brand mentions and sentiment across the web.

A. Social Listening Engine

  • Targets: ListeningTarget defines what to monitor (Hashtags, Handles, or Keywords) across Twitter/X, Reddit, and YouTube.
  • Mention Aggregation: SocialMention stores individual posts with engagement metrics (engagement_count, share_count).
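The relationship between the two models can be sketched with plain dataclasses. Only engagement_count and share_count come from this manual; the remaining field names and defaults are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class ListeningTarget:
    target_type: str   # assumed values: "HASHTAG" | "HANDLE" | "KEYWORD"
    query: str
    platforms: list = field(default_factory=lambda: ["twitter", "reddit", "youtube"])

@dataclass
class SocialMention:
    target: ListeningTarget        # each mention belongs to one target
    content: str
    engagement_count: int = 0
    share_count: int = 0
```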

B. Sentiment & Virality Detection

  • Sentiment Analysis: Each mention is passed through an NLP pipeline to assign a sentiment_score (-1.0 to 1.0) and a label (POSITIVE, NEGATIVE, NEUTRAL).
  • Virality Logic: The engine flags content as is_viral based on reach and engagement thresholds.
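The labeling and flagging rules can be sketched as two small functions. The -1.0 to 1.0 range and the three labels come from this manual; the 0.1 neutral band and the reach/engagement-rate thresholds are assumed values, not taken from the source.

```python
def label_sentiment(score: float) -> str:
    """Map a sentiment_score in [-1.0, 1.0] to a label.
    The 0.1-wide neutral band is an assumed threshold."""
    if score > 0.1:
        return "POSITIVE"
    if score < -0.1:
        return "NEGATIVE"
    return "NEUTRAL"

def is_viral(reach: int, engagement: int,
             reach_min: int = 50_000, rate_min: float = 0.05) -> bool:
    """Hypothetical virality rule: enough absolute reach AND an
    engagement rate above a minimum threshold."""
    return reach >= reach_min and (engagement / max(reach, 1)) >= rate_min
```

Requiring both conditions keeps small accounts with one enthusiastic reply from being flagged on rate alone.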

🔄 3. Asynchronous Execution Workflow

Both services rely on Celery to prevent long-running crawls from blocking the API.

  1. Request: User triggers an audit or mention-refresh via the API.
  2. Queue: The system dispatches a background task.
  3. Process: Celery workers execute the Scrapy or Requests logic.
  4. Completion: The worker updates the status to COMPLETED and generates a report.
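The four steps above can be condensed into a minimal sketch, with a synchronous stand-in for the Celery worker so the state transitions are easy to follow. In the real service the function would be a Celery task (e.g. decorated with `@shared_task`) and `crawl` would be the Scrapy or Requests logic; the dict-based audit record here is purely illustrative.

```python
from enum import Enum

class Status(Enum):
    PENDING = "PENDING"      # step 1-2: requested and queued
    RUNNING = "RUNNING"      # step 3: worker is crawling
    COMPLETED = "COMPLETED"  # step 4: done, report generated

def run_audit(audit: dict, crawl) -> dict:
    """Stand-in for the background task body executed by a Celery worker."""
    audit["status"] = Status.RUNNING
    audit["pages"] = crawl(audit["target_url"])  # Scrapy/Requests logic lives here
    audit["status"] = Status.COMPLETED           # report generation would follow
    return audit
```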

📂 4. Data Storage Standards

  • Reports: Technical SEO reports are stored in /media/audit_reports/.
  • Raw Data: Scraped social posts are stored in the SocialMention.raw_data JSONB field for future analysis.

Technical Tip: To add a new social platform, you must create a new “Adapter” in media_intelligence/crawler.py that normalizes the platform’s specific API response into our standard SocialMention format.
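As a hedged illustration of that tip, an adapter is just a function that maps one platform's payload onto the standard SocialMention field names. The Mastodon-style payload and key names below are invented for the example; only engagement_count, share_count, and raw_data come from this manual.

```python
def mastodon_adapter(raw: dict) -> dict:
    """Hypothetical adapter: normalize a platform-specific API response
    into the standard SocialMention field layout."""
    status = raw["status"]  # assumed payload shape for this example
    return {
        "content": status["content"],
        "engagement_count": status["favourites_count"],
        "share_count": status["reblogs_count"],
        "raw_data": raw,  # kept verbatim, per the Data Storage Standards above
    }
```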

