Master Prompt · v1.0 — by MubashirHassan.com
Complete Semantic
Topical Map
19-STEP PROCESS · 8-TAB EXCEL SYSTEM

Build a complete Semantic Topical Authority Map for any website. Based on the Koray + Mubashir methodology — 19 research steps, 8-tab Excel system, and macro semantic output ready for your content team.


One Prompt. Complete Architecture.

The Topical Map Master Prompt covers every step of real topical map creation — from source context selection to writing macro semantic components.

It guides you through the full 19-step process: source context, central entity, buyer personas, ontologies & taxonomies, authority hacking, outcomes research, lexical semantics, query templates, SERP-based clustering, attribute filtration (relevance → prominence → popularity), and final processed topical map with macro semantic output.

19 Research Steps
8 Excel Tabs
38 Writing Rules
200+ Geo Entries

Wait — What Even Is a Topical Map?

Before the 19 steps and technical jargon — here's the whole thing explained simply. If you've ever wondered why some websites show up on Google for everything and yours barely shows up for anything, this is the answer.

🧠 The Core Idea

Google rewards experts, not bloggers

Think of Google like a teacher grading homework. One essay about "shoes" doesn't make you a shoe expert. But if you write 80 articles covering every angle of shoes — brands, types, sizes, care, history, buying guides — Google says: "This website really knows shoes." That's topical authority. A topical map is your master plan for becoming that expert in your niche.

🗺️ What a Topical Map Is

A blueprint for every article you'll ever write

Imagine opening an online store selling running shoes. A topical map tells you: write about Nike vs. Adidas, running shoe types, best shoes under Rs. 5,000, how to clean shoes, shoe size guides, and 70 more topics — all connected to each other like a spider web. It's not random. Every article has a role and links to other articles so Google sees your full expertise.

💰 Why It Matters for Sales

More authority = more traffic = more customers

When Google trusts your site as a true expert, it ranks your pages higher — including the ones that sell your product. A random blog post might rank for one keyword. A proper topical map makes your whole site rank for hundreds of keywords, all pointing back to the same goal: getting customers to buy from you, hire you, or contact you.

Step 1

Figure out how your business makes money. Not just what you sell — but the exact action that puts money in your pocket. An online clothing store makes money when someone buys a shirt. A digital agency makes money when a client signs a retainer. This is called your Source Context — it's the compass that decides which topics are actually worth writing about.

Step 2

Pick your one main topic — your Central Entity. This is the single thing your whole website is about. For a shoe store it's "Shoes." Not "best shoes" or "cheap shoes" — just "Shoes." Everything on your site connects back to this. Google needs to see one clear theme across your entire website, not 10 random ones.

Step 3

Understand why people search — not just what they search. Some people Google "what are running shoes" (they're learning). Others search "best running shoes under 5000 PKR" (they're comparing). Others type "buy Nike running shoes Lahore" (they're ready to purchase). You need content for all three types of people — that's ToFu, MoFu, and BoFu content in SEO terms.

Step 4

Go deep on the topics closest to your sales (Core Section). These are articles directly connected to your product or service — reviews, buying guides, brand comparisons, pricing pages. Think of this like the inside of your store. You go very deep here: every size, every brand, every price range. This content is what converts visitors into customers.

Step 5

Go wide on helpful topics that build trust (Outer Section). These are informational articles that don't sell directly, but prove you know what you're talking about — "How to clean running shoes," "History of Nike," "Difference between running and walking shoes." They bring in early-stage visitors and tell Google you're a real expert, not just a sales page with 5 articles.

Step 6

Study the websites already winning in your niche. Find 3–5 sites Google already trusts for your topic. Look at how many articles they have, which topics they cover, and what's ranking well for them. You're not copying them — you're doing homework so you don't miss any important topics and you know exactly what you need to beat them.

Step 7

Build your list of 90–120 articles, then publish in the right order. Once you have your full topic list, you don't publish randomly. You start with the most important articles (1st batch: 20–50 pages), then expand (2nd batch), then fill in the gaps (3rd batch). It's like building a house — foundation first, then walls, then decoration. Publishing in waves signals to Google that you're growing intentionally.

Now that you understand the big picture — the 19 steps below are just the detailed, professional version of exactly what you just read. Each step has specific tools, outputs, and methods.

See the Full 19-Step Process ↓

The 19-Step Topical Map Process

Every step distilled — the complete methodology from blank page to processed topical map

Phase 1 — Foundation (Steps 1–5)

STEP 01

Define Source Context

Identify the primary monetization method of the business — how it earns money. This anchors every topic decision and justifies the site's existence in the SERP. Check the About Us page, homepage title tag, and solutions page. One site can have multiple source contexts.

STEP 02

Choose Central Entity

The single entity your entire topical map is built around. Verify it exists in Wikipedia, Wikidata, or Google's Knowledge Panel. Use Ahrefs terms-match to confirm query patterns. Central entity defines what behaviors and attributes the site covers sitewide. One topical map = one central entity.

STEP 03

Define Central Search Intent

Unification of source context + central entity. Identifies the dominant verbs and behaviors of users — learn, compare, find, buy, navigate. Determines what the website helps users accomplish at each stage of their search journey (ToFu → MoFu → BoFu).

STEP 03.1

Research Buyer Personas & ICPs

Identify 5–8 buyer personas based on the central search intent. For B2C: buyer personas. For B2B: ICPs first, then personas by role. Use LLMs (ChatGPT, Claude, Gemini) with the source context to generate personas and their top behaviors. Prioritize personas closest to the source context.

STEP 04

Define Core Section

The main attribute of the central entity that directly connects to the source context. Go deep into this attribute: cover every subcategory, brand, type, deal, and buying guide. All internal links flow from outer section to core section. Core = bottom-of-funnel, revenue-driving content.

STEP 05

Define Outer Section

All prominent attributes of the central entity that cannot be skipped but aren't directly core. Go wide: features, news, how-to guides, accessories, history, comparisons. Outer section captures top-of-funnel traffic and connects to core via internal links. Stop expanding when you diverge from the central entity.

Phase 2 — Research (Steps 6–9)

STEP 06

Identify Authority Sources

Find the 3–5 sites dominating the knowledge domain. Use Google (search seed query + see who dominates), Ahrefs Site Explorer (organic keywords report filtered by entity), and site operator searches. Prioritize topical coverage over DR/backlinks. Log: domain, total pages, entity-specific pages, entity-specific traffic, DR.

STEP 06.1

Select Reverse-Engineering Target

Identify one source to reverse-engineer and eventually replace: low DR, recent domain (1–3 years), high traffic on relevant topics, efficient page-to-traffic ratio. This is your benchmark for the next core algorithm update. Must be close to your source context, not a generic authority site.

STEP 07

Map Ontologies & Taxonomies

Use LLM prompt: "For [Central Entity], develop ontology covering main concept entities, taxonomy (categories → subcategories → instances), properties and relationships." Run on ChatGPT, Claude, and Gemini for coverage. Also check Wikipedia categories page for the entity to find derived outcomes and entity types.

STEP 08

Finalize Outcomes (Derived Entities)

Identify all derived entities (outcomes) of the central entity — product names, brands, models, types, subtypes. Sources: Wikipedia Blue Links + Categories, Wikidata properties, LLM entity lists, Ahrefs terms-match exports, competitor engrams. Prioritize outcomes most relevant to source context. Each important outcome will get its own query template research.

STEP 09

Gather All Data (Multi-Source)

Extract queries from: (1) Google — search suggestions, refinement tabs, PAA, related searches, image search tabs, people also search, bold SERP terms; (2) SEO Search Keyword Tool (SKT) — autocomplete expansion across all search engines; (3) Ahrefs — terms-match export for central entity + each key derived entity; (4) GSC — query contains [entity] AND page contains [entity] filters; (5) Wikipedia / Wikidata / Wikidata Graph; (6) Competitor sitemaps + Ahrefs Top Pages (filtered by entity keyword).
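All of these sources end up in one merged query table with Source and Source Platform columns. A minimal Python sketch of that merge-and-tag step, assuming illustrative source labels and sample queries rather than any fixed schema:

```python
from collections import OrderedDict

def consolidate_queries(batches):
    """Merge query lists from several research sources into one
    deduplicated table, keeping the first source that found each query.
    `batches` is a list of (source_label, platform, queries) tuples;
    the labels are illustrative, not a fixed schema."""
    merged = OrderedDict()
    for source, platform, queries in batches:
        for q in queries:
            key = " ".join(q.lower().split())  # normalize case and whitespace
            if key not in merged:
                merged[key] = {"query": q, "source": source, "platform": platform}
    return list(merged.values())

rows = consolidate_queries([
    ("query semantics", "Google", ["best running shoes", "Running Shoes price"]),
    ("authority hacking", "Ahrefs", ["best running shoes", "nike running shoes review"]),
])
# each row carries query, source, and platform fields for the research tab
```

The normalization key prevents the same query counting twice when it surfaces from multiple tools with different casing.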

STEP 09.1

Extract Competitor Engrams

Crawl authority sources with Screaming Frog. Extract unigrams, bigrams, trigrams from: page titles, H1, H2–H6, body text, internal link anchor texts, image alt tags, image file names. Identify top recurring terms across HTML components. Use to confirm outcomes and find topics you may have missed. Authority Hacking = reverse-engineering their topical coverage.
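The engram counting itself is simple once the crawl data is exported. A sketch in Python, where the titles below stand in for a real Screaming Frog export; only the counting logic is the point:

```python
from collections import Counter
import re

def extract_engrams(texts, n_values=(1, 2, 3)):
    """Count unigrams, bigrams, and trigrams across crawled text fields
    (titles, headings, anchors). Toy stand-in for a real crawl export."""
    counts = {n: Counter() for n in n_values}
    for text in texts:
        tokens = re.findall(r"[a-z0-9]+", text.lower())
        for n in n_values:
            for i in range(len(tokens) - n + 1):
                counts[n][" ".join(tokens[i:i + n])] += 1
    return counts

titles = [
    "Best Running Shoes for Beginners",
    "Running Shoes Buying Guide",
    "How to Clean Running Shoes",
]
engrams = extract_engrams(titles)
top_bigram, freq = engrams[2].most_common(1)[0]
# "running shoes" recurs in every title, so it surfaces as the top bigram
```

Recurring n-grams across a competitor's titles and headings point to the topics their coverage is built on.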

Phase 3 — Query Intelligence (Steps 10–11)

STEP 10

Identify Query Templates

A query template is a search pattern that can be applied to multiple outcomes at the same taxonomy level. Example: "best [Brand] [Entity] under $[Price]" — swap Brand and Price across all outcomes. Identify from: Google autocomplete patterns, Ahrefs modifiers column, SKT results. Separate nouns/outcomes from templates in your sheet. Key criteria: if you can swap a derived entity into the pattern and it still makes sense, it's a template.
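The swap test is easy to automate. A minimal Python sketch that expands one hypothetical template across brand and price slots (the bracket syntax and slot names are illustrative):

```python
from itertools import product

def expand_template(template, slots):
    """Expand one query template across every combination of slot values.
    The [Name] placeholder syntax here is illustrative."""
    names = list(slots)
    queries = []
    for combo in product(*(slots[n] for n in names)):
        q = template
        for name, value in zip(names, combo):
            q = q.replace(f"[{name}]", str(value))
        queries.append(q)
    return queries

queries = expand_template(
    "best [Brand] running shoes under $[Price]",
    {"Brand": ["Nike", "Adidas"], "Price": [50, 100]},
)
# 2 brands x 2 price points -> 4 candidate queries from one template
```

This is why templates scale: each new derived entity multiplies the page count without new research.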

STEP 10.1

Map Lexical Semantic Relations

For the central entity and key derived entities, map: Hypernyms (broader class), Hyponyms (narrower types), Meronyms (parts/components), Synonyms, Antonyms, Root Attributes (present in all entities of the same class). Sources: Wikipedia sections, Wikidata, LLM prompt: "Give all lexical relations for [Entity]: hypernyms, hyponyms, meronyms, synonyms." These relations feed outer section topics and root document coverage.

STEP 11

Assign Search Demand

Export all gathered queries to Ahrefs Keyword Explorer in bulk (paste up to 10,000 at once) to retrieve search volume. Add column: Search Demand. For authority hacking queries, volume is already attached via Ahrefs Top Pages export. Add: Source column (query semantics / lexical semantics / authority hacking / GSC historical / LLM), Source Platform (Google / Ahrefs / GSC / Wikipedia / LLM).

Phase 4 — Clustering & Filtration (Steps 12–14)

STEP 12

SERP-Based Keyword Clustering

Run all queries through a SERP clustering tool (Keyword Insights, SEO Utilities / suites.co, or similar). Set overlap threshold: 3–4 common SERP URLs = same page. Cluster before filtering to save time. Note SERP overlap percentage for borderline topics. Non-clustered queries still deserve individual pages. Check top-3 position overlap — top positions matter more than bottom.
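The 3–4 common-URL rule can be sketched as a greedy grouping pass. The SERP sets below are toy data standing in for a clustering tool's export:

```python
def cluster_by_serp_overlap(serps, threshold=3):
    """Greedy clustering: a query joins a cluster when its SERP URL set
    shares at least `threshold` URLs with the cluster's seed query.
    `serps` maps query -> set of ranking URLs (toy data here)."""
    clusters = []
    for query, urls in serps.items():
        for cluster in clusters:
            seed_urls = serps[cluster[0]]
            if len(urls & seed_urls) >= threshold:
                cluster.append(query)
                break
        else:
            clusters.append([query])  # no overlap match: query seeds its own page
    return clusters

serps = {
    "best running shoes": {"a.com", "b.com", "c.com", "d.com"},
    "top running shoes":  {"a.com", "b.com", "c.com", "e.com"},
    "how to clean shoes": {"x.com", "y.com", "z.com"},
}
clusters = cluster_by_serp_overlap(serps)
# the first two queries share 3 URLs, so they map to one page
```

Real tools also weight position, which is why the step notes that top-3 overlap matters more than overlap at the bottom of the SERP.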

STEP 13

Attribute Filtration

Remove topics in this exact order: (1) Relevance — remove anything not relevant to source context; (2) Prominence — remove attributes where the entity could exist without them (German league is NOT prominent for Germany; population IS); (3) Popularity — deprioritize if no meaningful search demand exists, unless prominent. Do after clustering: remove entire topic clusters, not individual queries.
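The strict ordering matters: relevance removes clusters, prominence removes attributes, and popularity only deprioritizes. A simplified Python sketch, where the topic fields and the keyword-based relevance check are illustrative stand-ins for real editorial judgment:

```python
def filter_topics(topics, source_context_terms):
    """Apply the three filters in strict order: relevance, then
    prominence, then popularity. Simplified illustration of the ordering."""
    # 1. Relevance: drop anything unrelated to the source context
    kept = [t for t in topics
            if any(term in t["topic"] for term in source_context_terms)]
    # 2. Prominence: drop attributes the entity could exist without
    kept = [t for t in kept if t["prominent"]]
    # 3. Popularity: deprioritize (never delete) zero-demand prominent topics
    for t in kept:
        t["priority"] = "low" if t["demand"] == 0 else "normal"
    return kept

topics = [
    {"topic": "running shoe sizes", "prominent": True, "demand": 900},
    {"topic": "running shoe factory tours", "prominent": False, "demand": 40},
    {"topic": "running shoe lacing patterns", "prominent": True, "demand": 0},
    {"topic": "marathon diets", "prominent": True, "demand": 5000},
]
result = filter_topics(topics, source_context_terms=["shoe"])
# "marathon diets" fails relevance despite high demand; lacing stays, deprioritized
```

Note that a high-demand topic can still be removed first on relevance, which is exactly why popularity must come last.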

STEP 14

Categorize & Prioritize Topics

Assign to each parent topic: Core or Outer label; Attribute category (design, features, price, etc.); Page type (definition, listicle, comparison, how-to, review, buying guide, news); Sales funnel stage (ToFu / MoFu / BoFu); Product/outcome it belongs to (for e-commerce). Assign prominence score (H/M/L) and popularity (search demand). Order topics by: source context relevance → prominence → search demand.
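The final ordering rule (source context relevance, then prominence, then search demand) maps naturally onto a compound sort key. A small Python sketch with made-up topic rows and field names:

```python
def order_topics(topics):
    """Sort topics by source-context relevance first, then prominence
    (H > M > L), then search demand. Field names are illustrative
    placeholders for the topical map columns."""
    prominence_rank = {"H": 0, "M": 1, "L": 2}
    return sorted(
        topics,
        key=lambda t: (
            -t["relevance"],                   # higher relevance first
            prominence_rank[t["prominence"]],  # H before M before L
            -t["demand"],                      # then higher demand first
        ),
    )

topics = [
    {"topic": "history of nike", "relevance": 1, "prominence": "L", "demand": 8000},
    {"topic": "buy running shoes", "relevance": 3, "prominence": "H", "demand": 900},
    {"topic": "running shoe reviews", "relevance": 3, "prominence": "H", "demand": 2400},
]
ordered = order_topics(topics)
# commercial, high-prominence topics outrank a high-volume history topic
```

The sort key encodes the methodology's point: raw search volume never overrides relevance or prominence.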

Phase 5 — Finalize & Build (Steps 15–19)

STEP 15

Build Raw Topical Map

Final ordered list of topics with: Topic (parent query), Child topics (from clustering), Search demand, Competing document URL (optional), Source, Source Platform, Core/Outer, Attribute, Page Type, Funnel Stage, Prominence score, Node type (Root / Seed / Quality / Typical). Order similar topics together — semantic proximity matters for publishing momentum.

STEP 16

Write Macro Semantic Components

For each topic in the raw map, produce: (1) Page Title tag (include primary entity + adjective/year/location/angle after colon — max 60 chars); (2) URL slug (hierarchical, lowercase, hyphens — entity taxonomy path, no stop words); (3) Featured Image URL (use a descriptive placeholder); (4) Image Alt Text (entity + attribute + context); (5) Meta Description (summarize title + child topics + key engrams — reflects page structure). Use competitor SERP snippets and H2 tables of contents as input.
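The slug and title-length rules can be checked programmatically. A sketch assuming a small illustrative stop-word list and a hypothetical taxonomy path:

```python
import re

# illustrative stop-word list; a production list would be longer
STOP_WORDS = {"a", "an", "the", "of", "for", "in", "to", "and"}

def make_slug(taxonomy_path):
    """Build a hierarchical, lowercase, hyphenated URL slug from an
    entity taxonomy path, dropping stop words."""
    parts = []
    for segment in taxonomy_path:
        tokens = [t for t in re.findall(r"[a-z0-9]+", segment.lower())
                  if t not in STOP_WORDS]
        parts.append("-".join(tokens))
    return "/" + "/".join(parts) + "/"

def title_fits(title, limit=60):
    """Check the 60-character page-title budget."""
    return len(title) <= limit

slug = make_slug(["Running Shoes", "Buying Guides", "Best Shoes for Beginners"])
# -> /running-shoes/buying-guides/best-shoes-beginners/
```

Running every planned page through checks like these before publishing catches over-length titles and inconsistent slugs early.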

STEP 17

Assign Publishing Momentum

Label each page: 1st Go (launch batch — 20–50 pages, core quality nodes first), 2nd Go (second wave — expand outer section + core depth), 3rd Go (long-tail and low-priority topics). Publishing frequency must match or exceed your reverse-engineering target competitor's frequency for the 2–3 weeks before each broad core algorithm update.

STEP 18

Node Type Classification

Classify every page as: Root Node (the foundational "what is [entity]" page — receives most internal links); Seed Node (category-level representative pages per outcome/attribute); Quality Node (the most important pages closest to source context — e.g., reviews, buying guides, brand pages); Typical Node (standard informational pages, how-tos, outer section). Route internal links from Typical → Seed → Quality → Root.
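The routing rule (Typical to Seed to Quality to Root) can be sketched as a one-level-up lookup. Page names and the single-level simplification are illustrative; real maps also link within levels:

```python
# Internal links flow up the hierarchy: Typical -> Seed -> Quality -> Root.
HIERARCHY = ["Typical", "Seed", "Quality", "Root"]

def link_targets(pages):
    """For each page, list link targets one level up the node hierarchy.
    A simplified routing rule with hypothetical page names."""
    links = {}
    for page, node_type in pages.items():
        level = HIERARCHY.index(node_type)
        if level == len(HIERARCHY) - 1:
            links[page] = []  # the Root node is the final link destination
        else:
            next_level = HIERARCHY[level + 1]
            links[page] = [p for p, t in pages.items() if t == next_level]
    return links

pages = {
    "how-to-clean-shoes": "Typical",
    "running-shoes-category": "Seed",
    "best-running-shoes": "Quality",
    "what-are-running-shoes": "Root",
}
links = link_targets(pages)
# the Typical how-to links to the Seed category page; Quality links to Root
```

The effect is that link equity accumulates at the Root and Quality nodes, the pages closest to the source context.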

STEP 19

Depth, Vastness & Momentum Decision

Choose your strategy: Deep + Momentum (recommended for new sites — dominate a narrow attribute set, publish fast); Vast + Momentum (cover all outer section broadly, high budget); Deep + Vast (ideal but requires resources). Cannot go vast without momentum. Going deep = covering every derived entity and sub-entity of your core section. Going vast = covering all prominent outer attributes broadly. Always track topical coverage ratio vs. your benchmark competitor.

Key insights: Your next topical map will always be better than your previous one. The first time is hardest — brain reflexes for connecting topics develop with practice. Don't wait for perfection; create your first map, publish, gather historical data (GSC), and use that data to reconfigure and improve. Content configuration at 6–9 month intervals is where the compound gains come from.

The 8-Tab Excel Architecture

Every tab maps to a specific phase of the 19-step process

TAB 1

Raw Research Layer

Multi-source SERP + competitive intelligence. 19 columns: source context, central entity, ontology, frame semantics, Wikipedia anchors, buyer personas, competitor rankings, Google suggestions, PAA questions, SERP bold terms, image results, entity type, LLM associations, query source, and search demand.

19 Columns A–S · Min 60 rows · Yellow header

TAB 2

Query Templates & Outcomes

Extracted query templates and derived outcomes (entities) from the research layer. Columns: Query Template pattern, Outcome entity it applies to, Example expanded query, Sales funnel stage, Page type it generates, Estimated pages from this template.

6 Columns A–F · All templates mapped · Blue header

TAB 3

Lexical & Entity Relations

Full lexical semantic map for the central entity and top 5 derived entities. Columns: Entity, Relation type (hypernym / hyponym / meronym / synonym / root attribute), Related term, Wikipedia / Wikidata source, Suggested page type, Core or Outer assignment.

6 Columns A–F · All lexical relations · Green header

TAB 4

Final Processed Topical Map

The publishing-ready content plan. Every page gets: topic, child topics, search demand, competing URL, source, core/outer, attribute category, page type, funnel stage, node type (Root/Seed/Quality/Typical), prominence (H/M/L), publishing momentum (1st/2nd/3rd Go).

14 Columns A–N · 90–120 pages · Orange header

TAB 5

Macro Semantic Components

For every page in Tab 4: Page Title (max 60 chars), URL slug (hierarchical, hyphenated), Image URL placeholder, Image alt text, Meta description, plus N-grams & entities (15+ entities and 15+ attributes per page for semantic completeness).

6 core cols + entity rows · 10 priority pages in full · Purple header

TAB 6

Content Brief System

10 full article briefs for priority pages. Exact H1, H2s, H3s with methodology notes per section, word count targets, internal anchor text, source context alignment notes, and micro semantic writing instructions based on competitor SERP analysis.

10 Full Briefs · Dark blue header

TAB 7

38 Semantic Writing Rules

Complete Koray Tugberk GUBUR semantic writing rule set with niche-specific examples. EVA Model, heading vector logic, entity declarations, contextual completeness, sentence-level semantic precision. The author rulebook for ranking in an entity-first search era.

38 Rules · Orange-red header

TAB 8

Locations Topical Map

Full Pakistan geographic entity coverage. 7 provinces, 35 divisions, 55+ cities, 18+ towns, neighborhood-level data for Lahore/Karachi/Islamabad, 35+ tehsils, 20 region types, 25+ urban/rural area types. Geo-modified query templates for each location tier.

200+ Geo Entries · Teal header

Business Details You'll Fill In

Replace each bracketed field with your real data — the more specific, the more precise the output

Required

Website URL + Brand Name

Your domain and full brand name as it appears on-site. Used for entity declarations, schema signals, and naming the output file.

Required

Source Context

How the business earns money — the monetization method. E.g., "sell mobile phone cases online" or "provide visa consultancy services." Anchors every topic decision.

Required

Central Entity

The single entity your entire topical map is centered around — verified in Wikipedia or Google Knowledge Panel. E.g., "Mobile Phone" (not "Mobile Phone Reviews").

Required

Central Search Intent

The dominant verb(s) describing user behavior — find & compare, buy, learn, repair. Determines Core vs. Outer section split and content type distribution.

Required

Top 5 Derived Outcomes

The most important derived entities — product names, brands, types (e.g., iPhone, Samsung Galaxy, budget phones, flagship phones). Each gets its own query template research.

Required

Target Country + City

Country for keyword research (sets search volumes) and primary service city (drives neighborhood-level Tab 8 geographic coverage).

Recommended

3 Authority Sources

Name + URL for each. Tab 4 performs authority hacking — extracting their topical coverage for engram analysis and gap identification.

Recommended

Reverse-Engineering Target

One competitor: lower DR, newer domain, high traffic on relevant topics. This is the site you want to replace at the next broad core algorithm update.

Recommended

Buyer Personas (2–3)

Who visits your site — tech enthusiast, budget buyer, brand loyalist, casual buyer. Used to prioritize query templates and content angles closest to your source context.

What's New

Major additions — all based on the Mubashir Hassan + Koray Tugberk GUBUR methodology

New

Complete 19-Step Process Prompt

The entire research process is now embedded into the prompt — from source context through to macro semantic components. Previously the prompt jumped straight to Excel tab specs; now the AI is instructed to reason through each step before populating the workbook, dramatically improving output quality and alignment with the actual methodology.

New — Tab 2

Query Templates & Outcomes Tab (replaces Process Topical Map)

The old 3-column Process Topical Map tab is replaced with a dedicated Query Templates & Outcomes tab. This directly maps the webinar's core teaching: identifying scalable query templates (e.g., "[Brand] [Entity] review", "[Entity] under $[Price]") and the derived outcome entities they apply to, making the template→outcome→page scaling logic explicit.

New — Tab 3

Lexical & Entity Relations Tab

New dedicated tab for hypernym / hyponym / meronym / synonym / root attribute mapping — the webinar emphasized these lexical semantic relations as essential for understanding what to cover in outer section topics and root documents. Sources: Wikipedia, Wikidata, LLM prompts.

Improved — Tab 4

Final Topical Map Expanded to 14 Columns

Old Tab 3 had 9 columns. New Tab 4 adds: Competing URL, Source, Source Platform, Funnel Stage, Node Type (Root/Seed/Quality/Typical), and Prominence score (H/M/L). Node type classification was a key webinar concept — distinguishing root documents, seed quality nodes, and typical nodes for internal link architecture.

Improved — Tab 5

Macro Semantic Components Separated from Entity Mapping

Previously combined as "N-Grams & Entities." Now Tab 5 explicitly covers all macro semantic components per page (title, URL, image URL, image alt, meta description) plus entity/attribute coverage — aligning with the webinar's explanation of what a "processed topical map" actually contains vs. what goes into a content brief.

New Prompt Section

Attribute Filtration Instructions (Relevance → Prominence → Popularity)

The prompt now explicitly instructs the AI to apply attribute filtration in the correct order: relevance first (does this belong given the source context?), then prominence (does the entity need this attribute to exist?), then popularity (is there search demand?). This was one of the most important distinctions from the webinar.

New Prompt Section

Publishing Depth, Vastness & Momentum Framework

Prompt now includes a dedicated section for the strategic publishing decision: Deep + Momentum (recommended for new sites), Vast + Momentum (high-budget), or Deep + Vast. Instructs the AI to assign 1st/2nd/3rd Go labels with the rationale tied to the reverse-engineering target's publishing frequency.

The Master Prompt

Fill in your business details below, copy the prompt, paste into Claude (recommended) or ChatGPT

Quick Steps:

  1. Fill in your business details in Section A
  2. Copy the entire prompt
  3. Paste into Claude or ChatGPT with code execution
  4. Receive your complete 8-tab Excel topical map
TOPICAL_MAP_MASTER_PROMPT
TOPICAL MAP MASTER PROMPT — MUBASHIR HASSAN & KORAY TUGBERK GUBUR METHODOLOGY

⚠ IMPORTANT — AI ASSISTANT: Before doing anything else, read the block below.
Human user: fill in your real details in Section A, then copy.

════════════════════════════════════════════════════════════════

You are an expert Semantic SEO strategist applying Mubashir Hassan + Koray Tugberk GUBUR's topical authority methodology. Build a complete Semantic Topical Authority Map for the business below. Output a single Excel (.xlsx) file with exactly 8 tabs as specified. Read ALL instructions before writing any code. Follow the 19-step reasoning process before populating the workbook.

────────────────────────────────────────────────────────────────
⚡ AI ASSISTANT — READ THIS FIRST (before anything else)
────────────────────────────────────────────────────────────────

CHECK: Look at Section A below. If any field still contains placeholder text inside square brackets [ ] — for example "[e.g., MyStore.com]" — that means the user has NOT filled in their real business details yet.

If placeholders are still present, DO NOT proceed to Section B or generate any Excel file. Instead, say exactly this to the user:

"Before I build your topical map, I need a few quick details about your business. Let's go one by one:
1. What is your website URL? (e.g., mystore.com)
2. What is your brand name?
3. What industry or niche are you in? (e.g., ecommerce clothing, SEO services, online courses)
4. Which country or region are you targeting?
5. What is your primary city?
6. How does your business make money? For example: selling products online, offering services, running paid courses, etc.
Once you share these, I will fill in the details and build your complete 8-tab Semantic Topical Authority Map."

Only proceed to Section B and the workbook generation AFTER the user has confirmed all six fields above.
────────────────────────────────────────────────────────────────
SECTION A — MY BUSINESS DETAILS
────────────────────────────────────────────────────────────────

Website URL: [e.g., MubashirHassan.com]
Brand Name: [e.g., Mubashir Hassan SEO Services]
Industry / Niche: [e.g., Digital Marketing]
Country / Region: [e.g., Pakistan]
Primary City: [e.g., Islamabad]

SOURCE CONTEXT (how the business earns money):
[e.g., "Sell SEO services and digital marketing packages to small and medium businesses in Pakistan — earning through monthly retainer fees, one-time project payments, and online course sales."]

CENTRAL ENTITY (Wikipedia-verifiable entity your entire map is built around):
[e.g., "Search Engine Optimization"]
Verification source: [e.g., Wikipedia / Google Knowledge Panel / Wikidata]

CENTRAL SEARCH INTENT (dominant user behaviors — verbs):
[e.g., "Learn → Compare → Hire an SEO agency in Pakistan"]
ToFu verbs: [e.g., learn, understand, discover, what is]
MoFu verbs: [e.g., compare, evaluate, find, best]
BoFu verbs: [e.g., hire, buy, contact, get quote, book]

TOP 5 DERIVED OUTCOMES / KEY ENTITIES (most important derived entities — prioritized by source context):
1. [e.g., On-Page SEO]
2. [e.g., Link Building]
3. [e.g., Technical SEO]
4. [e.g., Local SEO]
5. [e.g., SEO Audit]

BUYER PERSONAS (3 most relevant to source context):
Persona 1: [e.g., Small business owner in Islamabad wanting more website traffic and leads]
Persona 2: [e.g., E-commerce store owner looking to scale sales through organic search]
Persona 3: [e.g., Startup founder needing a full SEO strategy and content plan from scratch]

CORE SECTION (main attribute closest to source context — go DEEP here):
[e.g., "SEO services — on-page optimization, technical SEO, link building, local SEO, SEO audits, and keyword research for Pakistani businesses across all industries"]

OUTER SECTION (prominent attributes of the central entity — go WIDE here):
[e.g., "What is SEO, how search engines work, Google algorithm updates, content marketing, website speed, analytics, competitor analysis, social signals, digital marketing tools, blogging guides"]

AUTHORITY SOURCES (3–5 sites dominating your knowledge domain):
1. [Name — URL — Reason: e.g., "highest topical coverage on SEO in Pakistan"]
2. [Name — URL — Reason]
3. [Name — URL — Reason]
4. [Name — URL — Reason]
5. [Name — URL — Reason]
6. [Name — URL — Reason]

REVERSE-ENGINEERING TARGET (1 site to eventually outrank — newer, lower DR, efficient):
[Name — URL — DR approx — Est. relevant pages — Why chosen]

COMPETITORS FOR GAP ANALYSIS:
1. [Name + URL]
2. [Name + URL]
3. [Name + URL]
4. [Name + URL]
5. [Name + URL]
6. [Name + URL]

LOCAL PAYMENT METHODS: [e.g., JazzCash, EasyPaisa, HBL / Meezan bank transfer, Visa Card, Mastercard, Cash on Delivery]

REGULATORY BODIES: [e.g., PTA, SECP, FBR, PEMRA — whichever applies to your industry]

GEOGRAPHIC FOCUS: [e.g., All Pakistan provinces + all divisions + all major cities — primary: Islamabad]

────────────────────────────────────────────────────────────────
SECTION B — 19-STEP REASONING PROCESS
────────────────────────────────────────────────────────────────

Before generating the Excel file, reason through each step.
Write a brief (2–4 sentence) internal reasoning note per step — this becomes your "topical map brief" that ensures the workbook is coherent.

STEP 1: Source Context Validation
Confirm the source context clearly states HOW the business monetizes. The source context must define the purpose of the brand and justify its existence in the SERP. If unclear, derive it from: About Us page, homepage title tag, primary service/product description.
→ Output: Confirmed source context statement.

STEP 2: Central Entity Confirmation
The central entity is the thing every topic on the site discusses. It must be: (a) uniquely identifiable, (b) present in Wikipedia or Knowledge Graph, (c) the entity whose attributes and behaviors define your content sitewide. Verify it is NOT the source context itself (e.g., the entity is "Mobile Phone", not "Mobile Phone Reviews").
→ Output: Central entity name + Wikipedia/Wikidata confirmation.

STEP 3: Central Search Intent Mapping
Map the dominant user journey: what verbs describe what users DO with this entity? Divide into ToFu / MoFu / BoFu verb sets. The intersection of these verbs and the source context defines what gets covered in Core vs. Outer.
→ Output: Intent statement + verb map by funnel stage.

STEP 3.1: Buyer Persona Prioritization
From the 3 personas provided, rank them by proximity to source context. The persona closest to the BoFu action (hire/buy/contact) gets priority in the Core section. Their top behaviors and search patterns define which query templates to prioritize.
→ Output: Ranked personas with top 3 behaviors each.

STEP 4: Core Section Scoping
The Core section = the main attribute of the central entity that directly connects to the source context. List the top-level subcategories of the core section: service types, outcome types, entity types to cover at depth. Every Core section topic must map directly to a revenue-generating action or query.
→ Output: Core section outline with 5–8 subcategories.
STEP 5: Outer Section Scoping
The Outer section = all prominent attributes of the central entity that are NOT the core attribute but cannot be excluded without reducing topical authority. Map prominent attributes using lexical relations (hypernyms, hyponyms, meronyms) and Wikipedia section headings for the central entity. Stop when topics diverge from the central entity.
→ Output: Outer section outline with 5–8 attribute groups.

STEP 6: Authority Sources Assessment
For each authority source provided: estimate their entity-specific page count and traffic. Identify which source has the most efficient topical coverage (most traffic per relevant page). Mark the reverse-engineering target. Note: you are not trying to copy them — you are using their coverage to verify your outcomes and identify gaps.
→ Output: Authority source comparison table (name / relevant pages / traffic estimate / DR / notes).

STEP 7: Ontology & Taxonomy Mapping
Using the central entity, map the complete ontology: main concept entities → taxonomy categories → subcategory instances → key properties → relationships. Use LLM knowledge to approximate what Wikipedia categories and Wikidata properties show for this entity. This becomes the backbone of the outcomes list.
→ Output: 3-level ontology tree for the central entity.

STEP 8: Outcomes (Derived Entities) List
From the ontology, extract all derived outcome entities most relevant to the source context. Organize them into taxonomy levels: Level 1 (top-level category, e.g., Brand), Level 2 (subcategory, e.g., Apple), Level 3 (instance, e.g., iPhone 15 Pro Max). For each important outcome entity, note: whether it needs its own query template research, its priority (H/M/L) based on source context relevance.
→ Output: Outcomes table with taxonomy level + priority.
STEP 9: Query Data Sources Summary Specify which of the following sources were used to gather queries, and what they contributed: (a) Google search suggestions + autocomplete (b) SEO Search Keyword Tool (SKT) — all search engines autocomplete (c) Ahrefs terms-match — central entity + key derived entities (d) Google Search Console — query&page filters (if existing site) (e) Wikipedia Blue Links, categories, section headings (f) Wikidata properties and entity classes (g) LLM entity + relation prompts (h) Authority source sitemap + Ahrefs Top Pages (authority hacking) (i) Competitor engrams from Screaming Frog crawl → Output: Checklist of data sources used + key findings from each. STEP 9.1: Lexical Semantic Relations For the central entity and the top 3 derived outcomes, identify all lexical relations: Hypernyms (what broader category does this belong to?) Hyponyms (what types/subtypes does this have?) Meronyms (what parts/components does it have?) Synonyms (alternative names in search) Root attributes (attributes present in ALL entities of this class) Prominent attributes (attributes without which this entity cannot be defined) → Output: Lexical relation table — this feeds Tab 3 and outer section scoping. STEP 10: Query Templates Identification From all gathered queries, identify scalable query templates — patterns where you can substitute derived entity names and generate new valid queries. Format: "[Modifier] + [Entity/Outcome] + [Modifier]". For each template note: (a) which taxonomy level it applies to, (b) how many pages it could generate, (c) funnel stage it belongs to. Min 10 query templates. → Output: Query template table with taxonomy level, funnel stage, and page count estimate. STEP 11: Search Demand Assignment Logic Note: full search demand data comes from Ahrefs Keyword Explorer. For this prompt, estimate relative search demand as H (high — informational/commercial head terms), M (medium — mid-tail templates), L (low — long-tail specifics). 
Prioritize H and M for the 1st Go publishing batch.
→ Output: Demand tier assignment rationale.

STEP 12: Clustering Logic
Group topics by SERP intent overlap. Topics that likely share the same top-3 search results pages should be covered on one page. Topics with distinct SERPs get separate pages.
For borderline topics (low SERP overlap): prefer creating one page covering multiple related queries over splitting into thin pages. Always check: does creating a separate page serve the user better, or does it create a thinner version of an existing page?
→ Output: Clustering decision rules for this specific topical map.

STEP 13: Attribute Filtration Decisions
Apply in strict order:
1. RELEVANCE: Remove topics where the attribute is not relevant to the source context (e.g., "mobile phone repair centers" for a review-only affiliate site).
2. PROMINENCE: Remove topics where the attribute is NOT defining for the central entity (e.g., "Germany football league" for a Germany visa site — the league is not prominent for Germany as an entity; population IS).
3. POPULARITY: Deprioritize topics with no search demand — but do NOT remove them if they are prominent; cover them as micro-context within a broader page instead.
→ Output: Filtration rationale with 3–5 examples of topics removed/kept and why.

STEP 14: Categorization Framework
For this topical map, define the category labels that will be used in Tab 4:
Attribute categories: [list 6–8 attribute labels specific to this niche]
Page types: definition, listicle, comparison, how-to, review, buying guide, news/trending, FAQ, directory
Node types: Root (1 per map), Seed (1 per major outcome/attribute), Quality (core BoFu pages), Typical (standard outer section pages)
Funnel stages: ToFu / MoFu / BoFu
→ Output: Category definitions table specific to this topical map.
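The Step 12 overlap rule reduces to a small set comparison: two queries belong on one page when their top-ranking results overlap. A hedged Python sketch (the cut-off of 2 shared URLs out of the top 3 is an assumed threshold for illustration, and the URLs are hypothetical):

```python
def share_a_page(serp_a, serp_b, k=3, min_shared=2):
    """True when the top-k SERP URLs of two queries overlap enough
    that both queries should be covered on a single page (Step 12 heuristic)."""
    return len(set(serp_a[:k]) & set(serp_b[:k])) >= min_shared

# Hypothetical SERP results for three queries:
serp_best = ["a.com/best-shoes", "b.com/top-10", "c.com/review"]
serp_top  = ["b.com/top-10", "a.com/best-shoes", "d.com/guide"]
serp_care = ["e.com/cleaning", "f.com/care-tips", "g.com/how-to"]

share_a_page(serp_best, serp_top)   # heavy overlap: cluster onto one page
share_a_page(serp_best, serp_care)  # no overlap: separate pages
```

For borderline pairs (1 shared URL), the prompt's own rule applies: prefer one page covering both queries over two thin pages.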
STEP 15: Publishing Momentum Strategy
Based on the reverse-engineering target's estimated publishing frequency:
1st Go batch: 20–50 pages — all Seed nodes + the most important Quality nodes from the Core section. Focus on establishing the topical coverage signal.
2nd Go batch: expand the Outer section + deeper Core quality nodes.
3rd Go batch: long-tail, low-popularity typical nodes.
Momentum rule: publish the 1st Go at a frequency ≥ the target competitor's publishing rate for the 2–3 weeks before the next broad core algorithm update.
→ Output: Publishing momentum plan with page counts per batch.

STEPS 16–19: These are executed directly in the Excel output (Tabs 4–5). The macro semantic components (title, URL, image alt, meta description) and node type classification are produced in the workbook itself.

────────────────────────────────────────────────────────────────
SECTION C — EXCEL FILE NAME
────────────────────────────────────────────────────────────────
[BusinessName]_SemanticTopicalMap_by_MubashirHassan.xlsx

════════════════════════════════════════════════════════════════
SECTION D — 8-TAB EXCEL SPECIFICATIONS
════════════════════════════════════════════════════════════════

────────────────────────────────────────────────────────────────
TAB 1 — "Raw Research Layer" (19 columns A–S)
────────────────────────────────────────────────────────────────
Purpose: Multi-source SERP + competitive intelligence research. Every query researched, its context, and its source.
Row 1 — Headers (bold, yellow background #FFFF00):
A: Source Context | B: Central Entity | C: Ontology & Taxonomy Keyword | D: Frame Semantic ID
E: Wikipedia / Wikidata Definition | F: Buyer Persona | G: Competitor 1 Keyword
H: Competitor 2 Keyword | I: Competitor 3 Keyword | J: Google Suggestions (A–Z)
K: Related Searches | L: PAA Question | M: SERP Title (Top Ranking Page)
N: Bold SERP Terms | O: Image Search Results | P: SERP Entity Classification
Q: LLM Associated Entities | R: Query Source | S: Search Demand Tier (H/M/L)

Row 2 — Source Context row:
A: "Source Context" | B: [Your Source Context Statement]
C: [Top-level ontology] | D: [Root Frame Semantic ID e.g. Criminal_Defense_Pakistan]
E: [Wikipedia anchor for central entity] | F: [Primary buyer persona]
G–I: [Competitor domain names]

Data rows (minimum 60 rows — distribute across all 14 clusters below):
Column instructions (aligned to the Row 1 headers above):
A: blank (inherited from Row 2)
B: blank
C: Exact search query / ontology keyword
D: Frame Semantic ID — CamelCase_Underscore — match Wikipedia entity style
   Examples: Bail_Application_Pakistan, FIR_Quashing_Lahore, Criminal_Lawyer_Fees
E: [Term] – one plain English Wikipedia-style definition sentence
F: Specific buyer persona who searches this exact query (be granular)
G: What Competitor 1 ranks for — a keyword variant of this query
H: What Competitor 2 ranks for — a keyword variant of this query
I: What Competitor 3 ranks for — a keyword variant of this query
J: Google autocomplete expansion (add a letter before/after the query)
K: "People also searched for" related search shown by Google
L: PAA question for this topic — conversational, geo-specific, Pakistan-relevant
M: Title tag of the top-ranking page for this exact query
N: The bolded term Google highlights in the SERP snippet
O: Type of image shown in Google Images search for this query (e.g., infographic, photo, chart)
P: SERP entity classification (how Google classifies the result — e.g., Law Firm, Attorney, Legal Guide)
Q: Related entity/task an AI model (ChatGPT, Claude, Gemini) would associate with this query
R: Data source — one of: Query Semantics (Google) | Authority Hacking | Lexical Semantics | GSC Historical | LLM Ontology
S: Search demand tier — H (high: >1000/mo), M (medium: 100–999/mo), L (low: <100/mo)

Keyword clusters to cover (min 60 rows total):
1. Core service types (main service + variations, 6+ rows)
2. Outcome/problem types specific to niche (5+ rows)
3. Process / procedural keywords (5+ rows)
4. Jurisdiction / authority / regulatory body keywords (4+ rows)
5. Audience-specific person-type queries (4+ rows)
6. Urgency / emergency intent queries (3+ rows)
7. Geo-modified queries — 10+ cities, including primary city neighborhoods (8+ rows)
8. Pricing / budget / fee keywords (4+ rows)
9. Trust / credential / qualification keywords (4+ rows)
10. Comparison / alternative keywords (4+ rows)
11. Local payment method keywords (3+ rows)
12. Informational / educational / definition keywords (5+ rows)
13. How-to process keywords (4+ rows)
14. Trending / news / industry-specific keywords (3+ rows)

Formatting: Col C light yellow #FFFACD. Freeze row 1. Auto-filter all columns. Col widths: A=15, B=35, C=34, D=30, E=44, F=40, G–I=28, J–L=38, M=44, N=26, O=32, P=22, Q=40, R=28, S=14.

────────────────────────────────────────────────────────────────
TAB 2 — "Query Templates & Outcomes" (6 columns A–F)
────────────────────────────────────────────────────────────────
Purpose: The scalable content multiplication layer — query templates mapped to derived outcome entities.

Row 1 — Headers (bold, blue background #BDD7EE):
A: Query Template Pattern | B: Taxonomy Level | C: Outcome Entity (example)
D: Example Expanded Query | E: Funnel Stage | F: Est. Pages This Template Generates

Data rows (minimum 15 query templates):
A: Template pattern, e.g. "best [Brand] [Entity] under $[Price]" or "[Entity] vs [Entity] comparison"
B: Taxonomy level the template applies to (L1=category / L2=brand / L3=model / L4=feature)
C: One example derived outcome entity to plug into the template
D: Full example query with the outcome entity inserted
E: ToFu / MoFu / BoFu
F: Integer estimate of how many pages this template generates across all relevant outcomes

Formatting: Freeze row 1. Col widths: A=40, B=18, C=28, D=44, E=12, F=14.

────────────────────────────────────────────────────────────────
TAB 3 — "Lexical & Entity Relations" (6 columns A–F)
────────────────────────────────────────────────────────────────
Purpose: Full lexical semantic map — what to cover in the outer section and root documents.

Row 1 — Headers (bold, dark green #375623, white font):
A: Entity | B: Relation Type | C: Related Term | D: Source | E: Suggested Page Type | F: Core or Outer

Data rows — cover the central entity + top 5 derived outcomes:
A: The entity this relation belongs to (central entity or derived outcome name)
B: One of: Hypernym | Hyponym | Meronym | Synonym | Root Attribute | Prominent Attribute | Related Concept
C: The related term itself
D: Wikipedia / Wikidata / LLM / Ahrefs / Common Knowledge
E: definition | how-to | listicle | comparison | root-document | FAQ | news
F: Core or Outer

Formatting: Freeze row 1. Col widths: A=28, B=22, C=30, D=20, E=22, F=12.

────────────────────────────────────────────────────────────────
TAB 4 — "Final Topical Map" (14 columns A–N)
────────────────────────────────────────────────────────────────
Purpose: The complete, publishing-ready content plan. Every page has all the metadata needed to assign it to a writer.
Row 1 — Headers (bold, orange background #F4B942, dark text):
A: Parent Topic | B: Child Topics (from clustering) | C: Search Demand | D: Competing URL
E: Source | F: Core or Outer | G: Attribute Category | H: Page Type | I: Funnel Stage
J: Node Type | K: Prominence | L: Publishing Momentum | M: Author/Assignee | N: Status

Row 2 — Note: "Publishing momentum — 1st Go: launch batch 20–50 pages (all Seed + top Quality nodes). 2nd Go: Outer expansion + deeper Core. 3rd Go: long-tail typical nodes."

Data rows (90–120 pages total — maintain proper Core/Outer ratio ≈ 40/60):
Column instructions:
A: Parent topic (the main page title concept — not the full title tag; that's Tab 5)
B: 2–5 child/cluster topics that will be covered as H2s or micro-context on this page (comma-separated)
C: H / M / L demand tier (from Step 11 reasoning)
D: URL of the top-competing page for this topic (authority source or SERP leader)
E: Query Semantics | Lexical Semantics | Authority Hacking | GSC | LLM Ontology
F: Core or Outer
G: Attribute category specific to this niche (from Step 14 category definitions)
H: definition | listicle | comparison | how-to | review | buying-guide | news | FAQ | directory | landing-page
I: ToFu | MoFu | BoFu
J: Root | Seed | Quality | Typical
K: H (entity cannot exist without this attribute) | M (important but not defining) | L (supplementary)
L: 1st Go | 2nd Go | 3rd Go
M: [blank — for team use]
N: [blank — status tracking]

Publishing categories to cover (create page rows for each):
CAT 1 — Core Service / Product Pages (10–15 pages) — Node type: Quality/Seed
CAT 2 — Outcome / Problem Type Pages (10–15 pages) — Quality/Seed
CAT 3 — Process / How-It-Works Pages (8–12 pages) — Seed/Typical
CAT 4 — Comparison / Alternative Pages (6–10 pages) — Quality/Typical
CAT 5 — Geo-Modified Service Pages (15–20 pages, one per city tier) — Quality/Typical
CAT 6 — Educational / Informational Pages (10–15 pages) — Seed/Typical (Outer)
CAT 7 — How-To Guide Pages (8–12 pages) — Typical (Outer)
CAT 8 — Trust / Credential / FAQ Pages (5–8 pages) — Seed/Typical
CAT 9 — Trending / News / Seasonal Pages (5–8 pages) — Typical (trending nodes)
CAT 10 — Root Document (1 page) — Root node — covers: what is [entity], lexical relations, history, types

Formatting: Freeze row 1. Auto-filter. Alternate row colors white / #F2F7FF. Bold Core rows slightly. Col widths: A=38, B=44, C=10, D=44, E=26, F=10, G=22, H=16, I=10, J=10, K=10, L=10, M=20, N=14.

────────────────────────────────────────────────────────────────
TAB 5 — "Macro Semantic Components" (6 core columns + entity rows)
────────────────────────────────────────────────────────────────
Purpose: The processed topical map — every macro semantic component needed to publish and signal semantic completeness to search engines.

Row 1 — Headers (bold, purple background #7030A0, white font):
A: Parent Topic | B: Page Title Tag (max 60 chars) | C: URL Slug
D: Image URL (placeholder) | E: Image Alt Text | F: Meta Description

For all 90–120 pages from Tab 4, write:

B — Page Title Tag (max 60 chars — count characters):
Format: [Primary Entity + Outcome] : [Angle/Adjective/Year/Location]
Primary part: must include the core entity + outcome/attribute
Secondary part (after colon): adjective (best, top, complete), year (2024/2025), location, or benefit angle
Check: Does it match what top SERP competitors are doing? Are you using the right adjectives from your engram research?
Hard rule: NO title over 60 characters. Count every character including spaces.
C — URL Slug (lowercase, hyphens, no stop words, hierarchical):
Format: /[entity-taxonomy-path]/[attribute-or-outcome]/
Examples:
/criminal-defense/bail-hearings-lahore/
/mobile-phones/iphone-15-pro-max-review/
/mobile-phones/brands/samsung/galaxy-s24-specs/
Hard rules: lowercase only, hyphens not underscores, no special characters, no stop words in the slug (remove: a, the, in, of, for), hierarchical structure reflects the entity taxonomy

D — Image URL placeholder:
Format: /images/[entity]-[attribute]-[context].jpg
Example: /images/criminal-defense-bail-hearing-lahore.jpg

E — Image Alt Text:
Format: [Entity] + [Attribute] + [Context/Location]
Max 125 characters. Must include the primary entity + main attribute + geo/context where relevant.
Example: "Criminal defense lawyer discussing bail hearing process in Lahore High Court"

F — Meta Description (145–160 chars):
Structure: [Primary entity + outcome statement]. [Key benefit or differentiator]. [Call to action or question hook.]
Must contain: the primary entity term, a key attribute, the location (if a geo page), and at least 2 secondary terms from the child topics of that page.
Derive from: top-ranking competitor meta descriptions + the table-of-contents headings of the competitor's page — rewritten in your own words.

For the top 10 priority pages (1st Go batch), also add an Entity Coverage sub-table below each page:
Row structure: Entity Name | Entity Type | Attribute 1 | Attribute 2 | Attribute 3 | Keyword Variant
Minimum 15 entities and 15 attributes per priority page.
Entity types: Person, Organization, Place, Legal Concept, Product, Event, Process, Document, Role

Formatting: Freeze row 1. Col widths: A=38, B=36, C=38, D=36, E=40, F=50. Purple header.

────────────────────────────────────────────────────────────────
TAB 6 — "Content Brief System" (per-page article briefs)
────────────────────────────────────────────────────────────────
Purpose: Writer-ready briefs for the 10 highest-priority pages (from the 1st Go batch).
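The Tab 5 hard rules (60-char titles, stop-word-free hyphenated slugs, 145–160-char meta descriptions) are mechanical enough to verify in code before handing the workbook to a team. A minimal sketch, assuming only the stop-word list given in the slug rules; the function names are my own:

```python
import re

STOP_WORDS = {"a", "the", "in", "of", "for"}  # from the slug hard rules

def make_slug(*path_parts):
    """Build a lowercase, hyphenated, hierarchical slug with stop words removed."""
    segments = []
    for part in path_parts:
        words = [w for w in re.split(r"[^a-z0-9]+", part.lower())
                 if w and w not in STOP_WORDS]
        segments.append("-".join(words))
    return "/" + "/".join(segments) + "/"

def component_issues(title, meta):
    """Return the hard-rule violations for a title tag and meta description."""
    issues = []
    if len(title) > 60:                      # count every character, spaces included
        issues.append("title tag over 60 characters")
    if not (145 <= len(meta) <= 160):
        issues.append("meta description outside 145-160 characters")
    return issues

make_slug("Mobile Phones", "iPhone 15 Pro Max Review")
# -> "/mobile-phones/iphone-15-pro-max-review/"
```

Running `component_issues` over every Tab 5 row makes the "count characters" check repeatable rather than manual.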
For each of 10 pages, include a full brief block:

PAGE TITLE: [Exact title from Tab 5]
URL: [Exact slug from Tab 5]
NODE TYPE: [Root/Seed/Quality/Typical]
WORD COUNT TARGET: [e.g., 2,500–3,500 words]
TARGET BUYER PERSONA: [Which persona from Step 3.1]
PRIMARY QUERY TEMPLATE: [Which template from Tab 2 this page exemplifies]
INTERNAL LINKS NEEDED: [2–4 anchor texts + the target page topics they should link to]
HEADING STRUCTURE:
H1: [Exact H1 — may differ from the title tag; can be longer]
H2: [Section heading] → Methodology: [what to cover, how to approach it, sources]
H2: [Section heading] → Methodology: [...]
H2: [Section heading] → Methodology: [...]
H2: [Section heading] → Methodology: [...]
H3 (under relevant H2): [Subsection] → Note: [micro-context instruction]
H2: FAQ Section → Include 3–5 PAA questions from Tab 1 Col L
KEY ENTITIES TO MENTION: [8–10 entities from Tab 5 entity coverage]
KEY ATTRIBUTES TO COVER: [8–10 attributes]
ENGRAMS TO INCLUDE IN INTRO: [5–8 high-priority N-grams that define the page's central search intent]
COMPETING PAGE TO ANALYZE: [URL from Tab 4 Col D — study their H2 structure and PAA coverage]

Formatting: Each brief in a distinct row block. Dark blue #1F3864 header row per brief. Auto-filter.

────────────────────────────────────────────────────────────────
TAB 7 — "38 Semantic Writing Rules" (all rules + examples)
────────────────────────────────────────────────────────────────
Purpose: The complete author rulebook for semantic content writing.
Row 1 — Headers (bold, orange-red background #C0392B, white font):
A: Rule # | B: Rule Name | C: Rule Description | D: Example (Niche-Specific) | E: Why It Matters (Semantic Signal)

Include ALL 38 Koray Tugberk GUBUR semantic writing rules, covering: EVA Model (Entity-Vector-Attribute), heading vector logic, entity declarations in the first paragraph, sentence-level semantic density, context completeness requirements, macro vs micro context separation, FAQ section structure, internal link anchor text semantics, image alt text as a semantic signal, topical coverage ratios, content freshness signals, duplicate entity mention handling, secondary entity introduction rules, subject-predicate-object completeness, co-occurrence optimization, contextual synonym usage, and more.

Each rule must have a niche-specific example using the business details from Section A.

Formatting: Freeze row 1. Alternate row colors. Col widths: A=8, B=24, C=50, D=48, E=40.

────────────────────────────────────────────────────────────────
TAB 8 — "Locations Topical Map" (8 columns A–H)
────────────────────────────────────────────────────────────────
Purpose: Complete geographic entity coverage for Pakistan — every location tier from province to neighborhood.
Row 1 — Merged grey label: "Pakistan Geographic Entity Coverage — Topical Map"
Row 2 — Headers (bold, teal background #008080, white font):
A: Province | B: Division | C: City | D: Town | E: Neighborhood | F: Tehsil | G: Region Type | H: Area Type

Col A — 4 Provinces + 3 Territories: Punjab, Sindh, Khyber Pakhtunkhwa (KPK), Balochistan, Islamabad Capital Territory (ICT), Azad Jammu & Kashmir (AJK), Gilgit-Baltistan (GB)

Col B — 30+ Divisions: Lahore, Rawalpindi, Faisalabad, Multan, Gujranwala, Sargodha, Bahawalpur, DG Khan, Sahiwal, Karachi, Hyderabad, Sukkur, Larkana, Mirpurkhas, Shaheed Benazirabad, Peshawar, Mardan, Hazara, Malakand, Kohat, Bannu, Quetta, Kalat, Makran, Sibi, Nasirabad, Zhob, Islamabad, Mirpur (AJK), Muzaffarabad (AJK), Gilgit (GB), Baltistan (GB)

Col C — 55+ cities (include the full list from Karachi to Gilgit)

Col D — 18+ towns (Kahuta, Murree, Taxila, Attock, Kamra, Havelian, Haripur, etc.)

Col E — Neighborhoods (adapt to the primary service city):
If Lahore: DHA Phase 1–9, Gulberg I–V, Model Town, Johar Town, Bahria Town Lahore, Garden Town, Iqbal Town, Shadman, Cavalry Ground, Cantt, Township, Samanabad, Faisal Town, Wapda Town, Lake City
If Karachi: Clifton, DHA Phase 1–8, Gulshan-e-Iqbal, PECHS, Nazimabad, North Karachi, Korangi, Malir, Saddar, Lyari, Orangi Town, Bahria Town Karachi, Gulistan-e-Johar
If Islamabad: F-6, F-7, F-8, F-10, G-9, G-10, G-11, I-8, I-10, E-7, DHA Islamabad, Bahria Town Islamabad, PWD Colony, Bani Gala, Rawal Town, Saidpur Village

Col F — 35+ Tehsils (complete list including all major tehsils per province)

Col G — 20 Region Types: Metropolitan City, Provincial Capital, Divisional HQ, District HQ, Tehsil Town, Union Council, Cantonment Area, Industrial Zone, SEZ, Tourist Destination, Border Town, Coastal City, Riverine Area, Hill Station, Valley Region, Desert Region, Plateau Region, Mountain Region, Agricultural Zone, Mining Region

Col H — 25+ Urban & Rural Area Types: Karachi Metro, Lahore Metro, Islamabad-Rawalpindi Twin Cities, Faisalabad Industrial Metro, Peshawar Metro, Quetta Metro, Hyderabad-Qasimabad, Gujranwala Industrial Belt, Multan City of Saints, Sialkot Sports Hub, Northern Pakistan Rural, FATA Merged Districts, Balochistan Rural, Sindh Rural, Punjab Agricultural Belt, Coastal Balochistan, Azad Kashmir Mountain Areas, Gilgit-Baltistan Territory, Thar Desert, Cholistan Desert, Hill stations — Murree/Nathiagali/Ayubia, Hill stations — KPK Swat/Kalam/Naran

Formatting: Row 1 merged grey. Row 2 bold teal/white. Alternating rows white/#F2F2F2. Col widths: A=30, B=42, C=26, D=42, E=52, F=32, G=32, H=46. Freeze rows 1–2.

════════════════════════════════════════════════════════════════
SECTION E — GLOBAL FORMATTING RULES
════════════════════════════════════════════════════════════════
- Font: Arial 10 throughout all tabs
- Wrap text: all cells
- All cell borders: thin #D9D9D9
- No formula errors (#REF!, #DIV/0!, #N/A, etc.)
- Tab colors: Tab1=yellow, Tab2=blue, Tab3=green, Tab4=orange, Tab5=purple, Tab6=dark blue, Tab7=orange-red, Tab8=teal
- Auto-filter on all data tabs
- Freeze row 1 on all tabs (freeze rows 1–2 on Tab 8)

════════════════════════════════════════════════════════════════
SECTION F — CONTENT QUALITY STANDARDS
════════════════════════════════════════════════════════════════
Based on Koray Tugberk GUBUR's Semantic SEO & Mubashir Hassan Topical Authority framework:
1. Tab 1 ≥ 60 data rows | Tab 2 ≥ 15 query templates | Tab 3 ≥ 60 lexical relation rows
2. Tab 4: 90–120 pages | Core/Outer ratio ≈ 40/60 | Node type distribution: 1 Root, 8–12 Seed, 25–35 Quality, 50+ Typical
3. Tab 5: All 90–120 pages get macro semantic components. Top 10 get full entity sub-tables (15+ entities, 15+ attributes each).
4. Tab 6: 10 full article briefs for 1st Go priority pages
5. Tab 7: All 38 semantic writing rules with niche-specific examples
6. Tab 8: Complete Pakistan geographic data — 200+ rows

Frame Semantics format: CamelCase_Underscore, e.g.
Bail_Application_Pakistan
Wikipedia definition format: [Term] – [plain English one-line definition]
PAA questions: conversational, geo-specific, question-format (not statements)
Title tags: Hard 60-char max — count every character
URL slugs: lowercase, hyphens, hierarchical entity taxonomy path, no stop words
Meta descriptions: 145–160 chars — include primary entity + key attribute + secondary terms
Publishing momentum: Only 1st Go for the first page of each Main Seed Category
Consistency: same entities, brand terms, and entity declarations across all tabs

════════════════════════════════════════════════════════════════
SECTION G — FINAL DELIVERABLE
════════════════════════════════════════════════════════════════
Save as: [BusinessName]_SemanticTopicalMap_by_MubashirHassan.xlsx
Move to the output folder. Provide a download link.

After generating, confirm:
Tab 1: Total keyword rows (target: ≥60)
Tab 2: Total query templates (target: ≥15)
Tab 3: Total lexical relation rows (target: ≥60)
Tab 4: Total pages — Core count vs. Outer count — Node type breakdown (Root/Seed/Quality/Typical)
Tab 5: Pages with macro semantic components + entity sub-tables for the top 10
Tab 6: Number of full briefs produced
Tab 7: All 38 rules confirmed ✓
Tab 8: Total geographic entries

════════════════════════════════════════════════════════════════
END OF MASTER PROMPT
════════════════════════════════════════════════════════════════
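The Section F quality standards and the Section G confirmation step amount to a handful of numeric checks on the finished workbook. A sketch of that acceptance pass in Python (the field names in `stats` are my own labels, not part of the prompt; the 5-point tolerance on the Core ratio is an assumption):

```python
def confirm_deliverable(stats):
    """Check a generated workbook's counts against the Section F targets.
    Returns a dict mapping each check to pass/fail."""
    pages = stats["tab4_pages"]
    return {
        "tab1_rows >= 60":      stats["tab1_rows"] >= 60,
        "tab2_templates >= 15": stats["tab2_templates"] >= 15,
        "tab3_relations >= 60": stats["tab3_relations"] >= 60,
        "tab4_pages 90-120":    90 <= pages <= 120,
        "core_ratio ~= 40%":    abs(stats["core_pages"] / pages - 0.40) <= 0.05,
        "tab8_geo >= 200":      stats["tab8_geo"] >= 200,
    }

# Example counts for a workbook that meets every target:
report = confirm_deliverable({
    "tab1_rows": 64, "tab2_templates": 16, "tab3_relations": 70,
    "tab4_pages": 100, "core_pages": 41, "tab8_geo": 210,
})
all(report.values())  # every Section F target met for this example
```

Any failed check names exactly which tab to send back for another generation pass.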

Frequently Asked Questions

Simple answers to the most common questions about topical maps, semantic SEO, and how this prompt works — explained with real examples.

A topical map is your master content plan. Imagine you sell shoes online. Instead of writing random articles, a topical map tells you exactly which 100 articles to write — shoe brands, types, sizes, care guides, buying guides — all connected to each other. Google sees all these articles together and says: "This website really knows shoes." That's what gets you ranked higher than your competitors.

Topical authority means Google trusts your website as an expert on a specific topic. Think of it like being the best doctor in your city. People trust you more than a random clinic because you have years of experience in one area. A website with 80 articles about running shoes — covering brands, training tips, care guides, size charts — has topical authority. A site with only 5 articles does not, even if those 5 are excellent.

Source context simply means: how does your business make money? For example, if you run an online laptop store, your source context is: "Sell laptops to customers online — earning from each product sale." This is the compass for your whole topical map. Every article you write should connect back to this goal. If you sell laptops, writing about refrigerators is off-topic and wastes your content budget.

Your central entity is the one main topic your entire website revolves around. It must exist on Wikipedia or Google's Knowledge Panel. For a shoe store it's "Shoes." For a digital agency it's "Digital Marketing." For a law firm it's "Lawyer." Everything on your site connects back to this one entity. You can verify it by searching the word in Google — if a Knowledge Panel appears on the right side of results, it's a verified entity and a good choice.

Core Section = articles that directly make you money. These are product reviews, buying guides, brand comparisons, and service pages. Example: "Best Nike Running Shoes 2026" or "Running Shoes Under Rs. 5,000." Outer Section = helpful background articles that build trust. Example: "How to Clean Running Shoes" or "History of Nike." Outer Section doesn't sell directly, but it brings in early visitors and proves to Google that you are a real expert — which helps your Core pages rank higher.

Old SEO was about repeating keywords: stuff "best shoes" into your article 50 times. Semantic SEO is smarter. It's about teaching Google the meaning behind your content. You write naturally about shoes, mention brands, materials, sizes, use cases, and related words. Google understands that all these words together mean you really know about shoes — and ranks you for hundreds of related searches, not just one keyword. It's the difference between a keyword-stuffed ad and an expert article.

It's four easy steps: 1. Fill in your business details in Section A of the prompt — your website, niche, and how your business earns money. 2. Click the Copy button to copy the whole prompt. 3. Paste it into Claude (recommended) or ChatGPT with code execution turned on. 4. The AI reads your business details, follows all 19 research steps automatically, and generates a complete 8-tab Excel file with your topical map, query templates, content briefs, and 200+ location entries. It takes about 10–20 minutes.

A query template is a reusable search pattern with a blank space. Example: "Best [Brand] shoes under [Price]." You can swap Nike, Adidas, or Puma for [Brand], and Rs. 3,000 or Rs. 7,000 for [Price]. One template creates 20 article ideas instantly. This is how big content sites publish hundreds of articles efficiently — they find 10-15 templates and apply them to all their products or services. The prompt identifies these templates for your specific niche automatically.
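The multiplication described above is literally a cartesian product over the template's blank slots. A small Python sketch using the shoe example from this answer (the brands and prices are the answer's own examples; the function name is my own):

```python
from itertools import product

def expand_template(template, **slots):
    """Substitute every combination of slot values into a query template."""
    names = list(slots)
    queries = []
    for combo in product(*(slots[n] for n in names)):
        query = template
        for name, value in zip(names, combo):
            query = query.replace(f"[{name}]", value)
        queries.append(query)
    return queries

queries = expand_template(
    "Best [Brand] shoes under [Price]",
    Brand=["Nike", "Adidas", "Puma"],
    Price=["Rs. 3,000", "Rs. 7,000"],
)
len(queries)  # 3 brands x 2 prices = 6 article ideas from one template
```

Ten to fifteen templates applied across a full product taxonomy is how a single Tab 2 row can account for dozens of Tab 4 page rows.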

Yes, you need all three. ToFu (Top of Funnel) is for people just learning — example: "What is a running shoe?" They are not ready to buy yet. MoFu (Middle of Funnel) is for people comparing — example: "Nike vs Adidas running shoes." They are getting closer to buying. BoFu (Bottom of Funnel) is for people ready to purchase — example: "Buy Nike Air Zoom in Lahore." Without ToFu and MoFu content, visitors never discover your site. Without BoFu content, they never convert into customers.

Authority hacking means studying the websites already winning in your niche to learn from their topic coverage. It is not copying their content. It's like a student reading the top-scoring exam papers to understand which topics the examiner values most. You look at which articles they have, how many pages they publish per month, and what topics they haven't covered yet. Then you build a plan to cover the same topics better — plus fill in the gaps they missed.

Think of your website like a city. Root Node is the city center — your most important foundation page, like "What is Digital Marketing." Seed Nodes are neighborhoods — category pages like "SEO Services" or "Social Media Marketing." Quality Nodes are your best shops — high-value money pages like "Best SEO Packages in Pakistan 2026." Typical Nodes are regular streets — standard informational articles. Internal links always point from regular streets toward the city center.

Lexical relations are the word connections around your main topic. For the word "Laptop": a hypernym is "Computer" (the bigger category), a hyponym is "Gaming Laptop" (a specific type), a meronym is "Keyboard" (a part of the laptop), and a synonym is "Notebook." Mapping these helps you find dozens of article topics you would have missed. Google uses these word relationships to check whether you truly understand your topic — or just know a few keywords.

Publishing momentum means publishing your articles in the right order, not randomly. Think of it like building a house — you pour the foundation first, then build walls, then add furniture. 1st Go: publish your 20-50 most important Core pages. 2nd Go: expand Outer Section and add more depth. 3rd Go: fill in long-tail and low-priority topics. Publishing in consistent waves — faster than your main competitor — signals to Google that your site is growing with a real plan, not randomly.

Yes — completely. This prompt is designed to work for any business: ecommerce stores, law firms, digital agencies, restaurants, SaaS companies, coaching businesses, hospitals, and more. You simply change the business details in Section A. The AI adapts the entire 19-step process to your specific niche, industry, and how your business earns money. The examples in the prompt show an SEO agency, but you can replace them with your own business type.

With this AI prompt, Claude or ChatGPT generates the complete 8-tab Excel topical map in 10–20 minutes, depending on how detailed your business information is. Manually, a professional SEO strategist would take 2–3 weeks to research and build a map of this quality using Ahrefs, Google Search Console, Wikipedia, and manual clustering. The prompt compresses the entire 19-step process into one automated session.

Attribute filtration removes topics that don't belong in your topical map. You filter in this exact order: (1) Relevance — does this topic connect to how your business makes money? Remove if no. (2) Prominence — is this topic important enough that your audience expects to find it? Example: "shoe size" is prominent for a shoe store; "shoe factory history" probably isn't. (3) Popularity — do people actually search for it? This order ensures you never delete a relevant topic just because its search volume looks low.
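The strict ordering matters because a topic is only ever tested for popularity after it has survived relevance and prominence. A sketch of that pipeline (the boolean fields are my own labels for the three judgments, and the topic names reuse this answer's shoe-store examples plus one hypothetical):

```python
def filter_topics(topics):
    """Apply relevance -> prominence -> popularity in strict order.
    Prominent topics with no demand become micro-context, not deletions."""
    pages, micro_context = [], []
    for topic in topics:
        if not topic["relevant"]:      # 1. relevance: off-topic, drop entirely
            continue
        if not topic["prominent"]:     # 2. prominence: non-defining attribute, drop
            continue
        if topic["demand"] == "none":  # 3. popularity: prominent but unsearched ->
            micro_context.append(topic["name"])  # fold into a broader page
        else:
            pages.append(topic["name"])
    return pages, micro_context

pages, micro = filter_topics([
    {"name": "shoe size guide",      "relevant": True,  "prominent": True,  "demand": "H"},
    {"name": "shoe factory history", "relevant": True,  "prominent": False, "demand": "L"},
    {"name": "shoe sole materials",  "relevant": True,  "prominent": True,  "demand": "none"},
    {"name": "refrigerator repair",  "relevant": False, "prominent": True,  "demand": "H"},
])
# pages == ["shoe size guide"]; micro == ["shoe sole materials"]
```

Reversing the order (filtering by popularity first) would have deleted "shoe sole materials" before its prominence was ever considered, which is exactly the mistake the strict ordering prevents.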

Deep means covering one narrow area brilliantly — like writing 50 detailed articles about running shoes before touching hiking boots. Vast means covering all shoe types with just a few articles each. For new websites, going Deep wins. Why? Because Google ranks sites that demonstrate real expertise, not sites that have one article about everything. Once you dominate running shoes and start ranking, then you expand to hiking boots, casual shoes, and formal shoes. Going vast from day one usually means ranking for nothing.

Macro semantic components are the key HTML signals that tell Google what your page is about before it reads the full article. They include: Page Title (what shows in Google results — max 60 characters), URL Slug (the web address — short, keyword-rich, no stop words), Image Alt Text (description of your featured image for Google Images), and Meta Description (the 2-sentence summary shown under your title in search results). Getting these right is the first step to ranking — they are what AI systems like Google's Gemini and ChatGPT Search read first.

SERP clustering groups keywords by what Google actually shows for each search — not just by keyword similarity. If "best running shoes" and "top running shoes 2026" both show the exact same 4 websites in Google search results, they belong on one page, not two. Regular keyword tools group by word similarity, which can lead you to write 10 articles competing with each other. SERP-based clustering prevents this by using real search behavior to decide which keywords share a page.

Claude is recommended for the best results because it follows complex multi-step instructions most accurately and produces the highest quality structured Excel output. ChatGPT with code execution enabled works well too — just make sure the code execution toggle is on before pasting the prompt. Gemini can generate files but sometimes struggles with very long prompts. If you are using Claude, use Claude Sonnet or Claude Opus for best output quality.

Your reverse-engineering target is one competitor website you aim to outrank. Choose a site that: (1) has a lower Domain Rating (DR) than the big authority sites in your niche, (2) was created 1–3 years ago (not 10+ years), (3) is already getting decent traffic for your target keywords. Avoid picking massive sites like Wikipedia or Forbes — you can't beat them yet. Pick a realistic, beatable target. Then study their exact topic coverage and publishing speed, and create a plan to beat them at the next Google core update.

Yes — it is built specifically with Pakistan in mind. Tab 8 of the Excel output includes a full Locations Topical Map with 200+ geographic entries: all provinces, 35+ divisions, 55+ cities, towns, neighborhoods in Lahore/Karachi/Islamabad, and tehsils. It also generates geo-modified query templates for each location tier — like "SEO services in Lahore," "digital marketing agency Karachi," and "best laptop store Islamabad." This helps businesses rank across all Pakistan cities simultaneously, not just their home city.

Ready to Build Your Topical Map?

Copy the Master Prompt, fill in your business details, and let Claude execute the full 19-step methodology. Your complete semantic content architecture — from source context to publishing momentum.