Intelligence Services (SEO & Media)
This manual details the specialized intelligence engines within PlugZero that handle external data collection, technical site audits, and social sentiment tracking.
🔍 1. SEO Intelligence Service (seo_intelligence)
The SEO service is a multi-layered engine designed to audit technical site health and track market search trends using custom crawlers.
A. Technical Site Audits
- Core Model: `SiteAudit` orchestrates a batch of crawls against a `target_url`.
- Crawler Logic (`crawler.py`): Uses asynchronous requests to navigate a site's link graph up to a `max_pages` limit.
- Audit Findings:
  - `AuditedPage`: Stores performance metrics like `load_time`, `status_code`, and asset counts.
  - `PageIssue`: Categorizes findings into `error`, `warning`, or `notice`.
- Task Scheduling: Managed via Celery Beat. Supports `WEEKLY` or `MONTHLY` recurrence.
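The crawl loop in `crawler.py` can be pictured as a breadth-first walk of the link graph that stops at `max_pages`. The sketch below is illustrative, not the production code: the `fetch` callable is an assumption standing in for whatever engine (aiohttp, Scrapy) actually retrieves a page and returns its outbound links.

```python
import asyncio
from typing import Awaitable, Callable

async def crawl(start_url: str,
                fetch: Callable[[str], Awaitable[list[str]]],
                max_pages: int = 100) -> list[str]:
    """Breadth-first crawl of a site's link graph, capped at max_pages."""
    seen: set[str] = set()
    queue: list[str] = [start_url]
    audited: list[str] = []
    while queue and len(audited) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        links = await fetch(url)   # real engine also records load_time, status_code
        audited.append(url)
        queue.extend(link for link in links if link not in seen)
    return audited
```

Injecting `fetch` keeps the traversal logic testable without network access.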
B. Competitive Trend Analysis
- Keyword Tracking: `KeywordTarget` tracks specific search queries, assigning each an intent (Informational, Transactional, etc.).
- Competitive Scoring: `CompetitorDomain` computes a `technical_score` (0-100) by comparing your audit metrics against competitors.
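As an illustration of the 0-100 scoring idea (not the production formula), a `technical_score` could award a proportional share of 100 for each metric where your audit beats the competitor's. The metric names and the "lower is better" set here are assumptions.

```python
def technical_score(ours: dict, theirs: dict,
                    lower_is_better=("load_time", "error_count")) -> int:
    """Share of shared metrics on which `ours` matches or beats `theirs`, scaled to 0-100."""
    metrics = ours.keys() & theirs.keys()
    if not metrics:
        return 0
    wins = 0
    for m in metrics:
        # For latency-style metrics a smaller value wins; otherwise bigger wins.
        better = ours[m] <= theirs[m] if m in lower_is_better else ours[m] >= theirs[m]
        wins += 1 if better else 0
    return round(100 * wins / len(metrics))
```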
📊 2. Media Intelligence Service (media_intelligence)
This service provides real-time “Social Listening” capabilities, tracking brand mentions and sentiment across the web.
A. Social Listening Engine
- Targets: `ListeningTarget` defines what to monitor (hashtags, handles, or keywords) across Twitter/X, Reddit, and YouTube.
- Mention Aggregation: `SocialMention` stores individual posts with engagement metrics (`engagement_count`, `share_count`).
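Conceptually, the two records relate as below. This is a plain-dataclass sketch of the shapes, not the actual ORM models; only the field names mentioned in the text (`engagement_count`, `share_count`, `raw_data`) are confirmed.

```python
from dataclasses import dataclass, field

@dataclass
class ListeningTarget:
    platform: str      # e.g. "twitter", "reddit", "youtube"
    target_type: str   # "hashtag", "handle", or "keyword"
    query: str         # the hashtag, handle, or keyword to monitor

@dataclass
class SocialMention:
    target: ListeningTarget
    text: str
    engagement_count: int = 0
    share_count: int = 0
    raw_data: dict = field(default_factory=dict)  # full platform payload
```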
B. Sentiment & Virality Detection
- Sentiment Analysis: Each mention is passed through an NLP pipeline to assign a `sentiment_score` (-1.0 to 1.0) and a label (`POSITIVE`, `NEGATIVE`, `NEUTRAL`).
- Virality Logic: The engine flags content as `is_viral` based on reach and engagement thresholds.
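The final labeling step reduces to simple thresholding. In this sketch the ±0.1 neutrality band and the reach/engagement cut-offs are assumptions for illustration; the real pipeline and its thresholds live in the NLP service.

```python
def sentiment_label(score: float) -> str:
    """Map a sentiment_score in [-1.0, 1.0] to a label, assuming a ±0.1 neutral band."""
    if score > 0.1:
        return "POSITIVE"
    if score < -0.1:
        return "NEGATIVE"
    return "NEUTRAL"

def is_viral(reach: int, engagement_count: int,
             reach_threshold: int = 50_000,
             engagement_threshold: int = 1_000) -> bool:
    """Flag content as viral when both reach and engagement clear their thresholds."""
    return reach >= reach_threshold and engagement_count >= engagement_threshold
```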
🔄 3. Asynchronous Execution Workflow
Both services rely on Celery to prevent long-running crawls from blocking the API.
- Request: User triggers an audit or mention-refresh via the API.
- Queue: The system dispatches a background task.
- Process: Celery workers execute the Scrapy or Requests logic.
- Completion: The worker updates the status to `COMPLETED` and generates a report.
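The worker-side flow above can be sketched as a small status machine. Only `COMPLETED` is confirmed by the text; the other status names and the function shape are assumptions, with the crawl and report steps injected so the lifecycle is visible without Celery itself.

```python
from enum import Enum

class AuditStatus(str, Enum):
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"   # confirmed status name
    FAILED = "FAILED"

def run_audit_task(crawl, report) -> AuditStatus:
    """Body of a worker task: run the crawl, generate the report, track status."""
    status = AuditStatus.RUNNING
    try:
        pages = crawl()    # the Scrapy/Requests logic
        report(pages)      # report generation on success
        status = AuditStatus.COMPLETED
    except Exception:
        status = AuditStatus.FAILED
    return status
```

In production this body would be wrapped in a Celery task decorator and dispatched to the queue rather than called directly.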
📂 4. Data Storage Standards
- Reports: Technical SEO reports are stored in `/media/audit_reports/`.
- Raw Data: Scraped social posts are stored in the `SocialMention.raw_data` JSONB field for future analysis.
Technical Tip: To add a new social platform, create a new "Adapter" in `media_intelligence/crawler.py` that normalizes the platform's specific API response into our standard `SocialMention` format.
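A hypothetical adapter following this tip might look like the sketch below, normalizing a Reddit-style payload into the `SocialMention` field names. The input keys (`title`, `ups`, `num_comments`, `num_crossposts`) are assumptions about the platform response, not guaranteed.

```python
def reddit_adapter(raw: dict) -> dict:
    """Normalize a Reddit-style API item into the standard SocialMention shape."""
    return {
        "text": raw.get("title", "") or raw.get("selftext", ""),
        "engagement_count": raw.get("ups", 0) + raw.get("num_comments", 0),
        "share_count": raw.get("num_crossposts", 0),
        "raw_data": raw,  # keep the full payload for future analysis
    }
```

Keeping adapters as pure dict-to-dict functions makes each platform's quirks easy to unit-test in isolation.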