Build a complete Semantic Topical Authority Map for any website. Based on the Koray + Mubashir methodology — 19 research steps, 8-tab Excel system, and macro semantic output ready for your content team.
The Topical Map Master Prompt covers every step of real topical map creation — from source context selection to writing macro semantic components.
It guides you through the full 19-step process: source context, central entity, buyer personas, ontologies & taxonomies, authority hacking, outcomes research, lexical semantics, query templates, SERP-based clustering, attribute filtration (relevance → prominence → popularity), and final processed topical map with macro semantic output.
Before the 19 steps and technical jargon — here's the whole thing explained simply. If you've ever wondered why some websites show up on Google for everything and yours barely shows up for anything, this is the answer.
Think of Google like a teacher grading homework. One essay about "shoes" doesn't make you a shoe expert. But if you write 80 articles covering every angle of shoes — brands, types, sizes, care, history, buying guides — Google says: "This website really knows shoes." That's topical authority. A topical map is your master plan for becoming that expert in your niche.
Imagine opening an online store selling running shoes. A topical map tells you: write about Nike vs. Adidas, running shoe types, best shoes under Rs. 5,000, how to clean shoes, shoe size guides, and 70 more topics — all connected to each other like a spider web. It's not random. Every article has a role and links to other articles so Google sees your full expertise.
When Google trusts your site as a true expert, it ranks your pages higher — including the ones that sell your product. A random blog post might rank for one keyword. A proper topical map makes your whole site rank for hundreds of keywords, all pointing back to the same goal: getting customers to buy from you, hire you, or contact you.
Figure out how your business makes money. Not just what you sell — but the exact action that puts money in your pocket. An online clothing store makes money when someone buys a shirt. A digital agency makes money when a client signs a retainer. This is called your Source Context — it's the compass that decides which topics are actually worth writing about.
Pick your one main topic — your Central Entity. This is the single thing your whole website is about. For a shoe store it's "Shoes." Not "best shoes" or "cheap shoes" — just "Shoes." Everything on your site connects back to this. Google needs to see one clear theme across your entire website, not 10 random ones.
Understand why people search — not just what they search. Some people Google "what are running shoes" (they're learning). Others search "best running shoes under 5000 PKR" (they're comparing). Others type "buy Nike running shoes Lahore" (they're ready to purchase). You need content for all three types of people — that's ToFu, MoFu, and BoFu content in SEO terms.
Go deep on the topics closest to your sales (Core Section). These are articles directly connected to your product or service — reviews, buying guides, brand comparisons, pricing pages. Think of this like the inside of your store. You go very deep here: every size, every brand, every price range. This content is what converts visitors into customers.
Go wide on helpful topics that build trust (Outer Section). These are informational articles that don't sell directly, but prove you know what you're talking about — "How to clean running shoes," "History of Nike," "Difference between running and walking shoes." They bring in early-stage visitors and tell Google you're a real expert, not just a sales page with 5 articles.
Study the websites already winning in your niche. Find 3–5 sites Google already trusts for your topic. Look at how many articles they have, which topics they cover, and what's ranking well for them. You're not copying them — you're doing homework so you don't miss any important topics and you know exactly what you need to beat them.
Build your list of 90–120 articles, then publish in the right order. Once you have your full topic list, you don't publish randomly. You start with the most important articles (1st batch: 20–50 pages), then expand (2nd batch), then fill in the gaps (3rd batch). It's like building a house — foundation first, then walls, then decoration. Publishing in waves signals to Google that you're growing intentionally.
Now that you understand the big picture — the 19 steps below are just the detailed, professional version of exactly what you just read. Each step has specific tools, outputs, and methods.
See the Full 19-Step Process ↓
Every step distilled — the complete methodology from blank page to processed topical map
Phase 1 — Foundation (Steps 1–6)
Identify the primary monetization method of the business — how it earns money. This anchors every topic decision and justifies the site's existence in the SERP. Check the About Us page, homepage title tag, and solutions page. One site can have multiple source contexts.
The single entity your entire topical map is built around. Verify it exists in Wikipedia, Wikidata, or Google's Knowledge Panel. Use Ahrefs terms-match to confirm query patterns. Central entity defines what behaviors and attributes the site covers sitewide. One topical map = one central entity.
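A quick way to sanity-check a candidate central entity is Wikidata's public search API. Below is a minimal sketch, assuming the requests library; the helper name and the threshold logic are illustrative, not part of the prompt itself.

```python
# Minimal sketch: check whether a candidate central entity resolves to a
# known Wikidata item via the public wbsearchentities API.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def verify_central_entity(label: str, language: str = "en") -> list[dict]:
    """Return Wikidata candidates matching the label, if any."""
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "format": "json",
        "limit": 5,
    }
    resp = requests.get(WIKIDATA_API, params=params, timeout=10)
    resp.raise_for_status()
    return [
        {"id": hit["id"], "label": hit.get("label"), "desc": hit.get("description")}
        for hit in resp.json().get("search", [])
    ]

# "Shoe" should resolve to an item; an empty list suggests the label is
# too narrow or too compound ("best cheap shoes") to be a central entity.
print(verify_central_entity("Shoe"))
```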
Unification of source context + central entity. Identifies the dominant verbs and behaviors of users — learn, compare, find, buy, navigate. Determines what the website helps users accomplish at each stage of their search journey (ToFu → MoFu → BoFu).
Identify 5–8 buyer personas based on the central search intent. For B2C: buyer personas. For B2B: ICPs first, then personas by role. Use LLMs (ChatGPT, Claude, Gemini) with the source context to generate personas and their top behaviors. Prioritize personas closest to the source context.
The main attribute of the central entity that directly connects to the source context. Go deep into this attribute: cover every subcategory, brand, type, deal, and buying guide. All internal links flow from outer section to core section. Core = bottom-of-funnel, revenue-driving content.
All prominent attributes of the central entity that cannot be skipped but aren't directly core. Go wide: features, news, how-to guides, accessories, history, comparisons. Outer section captures top-of-funnel traffic and connects to core via internal links. Stop expanding when you diverge from the central entity.
Phase 2 — Research (Steps 6–11)
Find the 3–5 sites dominating the knowledge domain. Use Google (search seed query + see who dominates), Ahrefs Site Explorer (organic keywords report filtered by entity), and site operator searches. Prioritize topical coverage over DR/backlinks. Log: domain, total pages, entity-specific pages, entity-specific traffic, DR.
Identify one source to reverse-engineer and eventually replace: low DR, recent domain (1–3 years), high traffic on relevant topics, efficient page-to-traffic ratio. This is your benchmark for the next core algorithm update. Must be close to your source context, not a generic authority site.
Use LLM prompt: "For [Central Entity], develop ontology covering main concept entities, taxonomy (categories → subcategories → instances), properties and relationships." Run on ChatGPT, Claude, and Gemini for coverage. Also check Wikipedia categories page for the entity to find derived outcomes and entity types.
Identify all derived entities (outcomes) of the central entity — product names, brands, models, types, subtypes. Sources: Wikipedia Blue Links + Categories, Wikidata properties, LLM entity lists, Ahrefs terms-match exports, competitor n-grams. Prioritize outcomes most relevant to source context. Each important outcome will get its own query template research.
Extract queries from: (1) Google — search suggestions, refinement tabs, PAA, related searches, image search tabs, people also search, bold SERP terms; (2) SEO Search Keyword Tool (SKT) — autocomplete expansion across all search engines; (3) Ahrefs — terms-match export for central entity + each key derived entity; (4) GSC — query contains [entity] AND page contains [entity] filters; (5) Wikipedia / Wikidata / Wikidata Graph; (6) Competitor sitemaps + Ahrefs Top Pages (filtered by entity keyword).
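For the Google suggestions portion, many practitioners script the alphabet-soup expansion against Google's unofficial suggest endpoint. A hedged sketch follows — the endpoint is undocumented and unsupported, and may throttle or change without notice; everything beyond the expansion technique itself is illustrative.

```python
# Alphabet-soup autocomplete harvesting ("shoes a", "shoes b", ...) against
# the widely used but UNOFFICIAL suggestqueries endpoint.
import string
import time
import requests

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def autocomplete(seed: str) -> list[str]:
    resp = requests.get(
        SUGGEST_URL,
        params={"client": "firefox", "q": seed},
        timeout=10,
    )
    resp.raise_for_status()
    # Response shape: [seed, [suggestion, suggestion, ...]]
    return resp.json()[1]

def expand_seed(entity: str) -> set[str]:
    queries = set(autocomplete(entity))
    for letter in string.ascii_lowercase:
        queries.update(autocomplete(f"{entity} {letter}"))
        time.sleep(0.5)  # be polite; unofficial endpoints throttle aggressively
    return queries

print(sorted(expand_seed("running shoes"))[:20])
```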
Crawl authority sources with Screaming Frog. Extract unigrams, bigrams, trigrams from: page titles, H1, H2–H6, body text, internal link anchor texts, image alt tags, image file names. Identify top recurring terms across HTML components. Use to confirm outcomes and find topics you may have missed. Authority Hacking = reverse-engineering their topical coverage.
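To make the n-gram step concrete, here is a minimal counter over a Screaming Frog CSV export. The column names ("Title 1", "H1-1") match typical internal-HTML exports but should be verified against your crawl; the stop-word list is illustrative.

```python
# Count unigrams, bigrams, and trigrams across crawled titles and H1s.
import csv
import re
from collections import Counter

STOP = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on", "with"}

def ngrams(text: str, n: int) -> list[tuple[str, ...]]:
    words = [w for w in re.findall(r"[a-z0-9']+", text.lower()) if w not in STOP]
    return [tuple(words[i : i + n]) for i in range(len(words) - n + 1)]

counts = {1: Counter(), 2: Counter(), 3: Counter()}
with open("internal_html.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for column in ("Title 1", "H1-1"):  # extend to H2s, anchors, alt text
            for n in counts:
                counts[n].update(ngrams(row.get(column, ""), n))

for n, counter in counts.items():
    print(f"top {n}-grams:", counter.most_common(10))
```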
Phase 3 — Query Intelligence (Steps 10–11)
A query template is a search pattern that can be applied to multiple outcomes at the same taxonomy level. Example: "best [Brand] [Entity] under $[Price]" — swap Brand and Price across all outcomes. Identify from: Google autocomplete patterns, Ahrefs modifiers column, SKT results. Separate nouns/outcomes from templates in your sheet. Key criteria: if you can swap a derived entity into the pattern and it still makes sense, it's a template.
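The swap test is easy to mechanize. A small sketch with illustrative templates and fill values — the point is that one pattern times N outcomes yields N candidate pages at the same taxonomy level.

```python
# Expand query templates across derived outcomes (brands, prices are examples).
from itertools import product

templates = [
    "best {brand} running shoes under {price}",
    "{brand} running shoes review",
]
brands = ["Nike", "Adidas", "Puma"]
prices = ["$50", "$100"]

expanded = []
for template in templates:
    if "{price}" in template:
        expanded += [template.format(brand=b, price=p) for b, p in product(brands, prices)]
    else:
        expanded += [template.format(brand=b) for b in brands]

print(len(expanded), "candidate queries")  # 6 + 3 = 9
```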
For the central entity and key derived entities, map: Hypernyms (broader class), Hyponyms (narrower types), Meronyms (parts/components), Synonyms, Antonyms, Root Attributes (present in all entities of the same class). Sources: Wikipedia sections, Wikidata, LLM prompt: "Give all lexical relations for [Entity]: hypernyms, hyponyms, meronyms, synonyms." These relations feed outer section topics and root document coverage.
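For the Wikidata route, these relations map roughly onto real Wikidata properties: P279 (subclass of) for hypernym/hyponym pairs and P527 (has part) for meronyms. A hedged sketch against the public SPARQL endpoint; the Q-id and the helper structure are illustrative.

```python
# Pull hypernyms, hyponyms, and meronyms for a Wikidata item via SPARQL.
import requests

SPARQL = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?relation ?itemLabel WHERE {
  { wd:%(qid)s wdt:P279 ?item . BIND("hypernym" AS ?relation) }
  UNION { ?item wdt:P279 wd:%(qid)s . BIND("hyponym" AS ?relation) }
  UNION { wd:%(qid)s wdt:P527 ?item . BIND("meronym" AS ?relation) }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

def lexical_relations(qid: str) -> dict[str, list[str]]:
    resp = requests.get(
        SPARQL,
        params={"query": QUERY % {"qid": qid}, "format": "json"},
        headers={"User-Agent": "topical-map-research/0.1"},  # WDQS requires a UA
        timeout=30,
    )
    resp.raise_for_status()
    out: dict[str, list[str]] = {"hypernym": [], "hyponym": [], "meronym": []}
    for row in resp.json()["results"]["bindings"]:
        out[row["relation"]["value"]].append(row["itemLabel"]["value"])
    return out

print(lexical_relations("Q22676"))  # Q22676: Wikidata's item for "shoe" (verify for your entity)
```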
Export all gathered queries to Ahrefs Keywords Explorer in bulk (paste up to 10,000 at once) to retrieve search volume. Add column: Search Demand. For authority hacking queries, volume is already attached via the Ahrefs Top Pages export. Add: Source column (query semantics / lexical semantics / authority hacking / GSC historical / LLM), Source Platform (Google / Ahrefs / GSC / Wikipedia / LLM).
Phase 4 — Clustering & Filtration (Steps 12–14)
Run all queries through a SERP clustering tool (Keyword Insights, SEO Utilities / suites.co, or similar). Set overlap threshold: 3–4 common SERP URLs = same page. Cluster before filtering to save time. Note SERP overlap percentage for borderline topics. Non-clustered queries still deserve individual pages. Check top-3 position overlap — top positions matter more than bottom.
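If you would rather script the overlap rule than use a tool, the logic is a simple union-find over shared SERP URLs. A sketch, assuming you already have top-10 URLs per query from a SERP export — fetching SERPs is out of scope here.

```python
# Cluster queries whose top results share >= OVERLAP_THRESHOLD URLs.
OVERLAP_THRESHOLD = 3

def cluster_by_serp(serps: dict[str, set[str]]) -> list[list[str]]:
    queries = list(serps)
    parent = {q: q for q in queries}

    def find(q: str) -> str:
        while parent[q] != q:
            parent[q] = parent[parent[q]]  # path compression
            q = parent[q]
        return q

    for i, a in enumerate(queries):
        for b in queries[i + 1 :]:
            if len(serps[a] & serps[b]) >= OVERLAP_THRESHOLD:
                parent[find(a)] = find(b)  # merge the two clusters

    clusters: dict[str, list[str]] = {}
    for q in queries:
        clusters.setdefault(find(q), []).append(q)
    return list(clusters.values())

serps = {
    "best running shoes": {"a.com", "b.com", "c.com", "d.com"},
    "top running shoes 2026": {"a.com", "b.com", "c.com", "e.com"},
    "how to clean running shoes": {"x.com", "y.com", "z.com"},
}
print(cluster_by_serp(serps))  # first two cluster together; the third stands alone
```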
Remove topics in this exact order: (1) Relevance — remove anything not relevant to source context; (2) Prominence — remove attributes where the entity could exist without them (German league is NOT prominent for Germany; population IS); (3) Popularity — deprioritize if no meaningful search demand exists, unless prominent. Do after clustering: remove entire topic clusters, not individual queries.
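The ordering matters because popularity must never veto a prominent attribute. A minimal sketch of the three-stage filter, with illustrative scoring fields — in practice these judgments come from the earlier research steps, not from code.

```python
# Apply attribute filtration in order: relevance -> prominence -> popularity.
from dataclasses import dataclass

@dataclass
class TopicCluster:
    name: str
    relevant_to_source_context: bool
    prominent: bool          # could the entity exist without this attribute?
    search_demand: int       # monthly volume from the search demand step

def filter_topics(clusters: list[TopicCluster], min_demand: int = 10) -> list[TopicCluster]:
    kept = [c for c in clusters if c.relevant_to_source_context]     # 1. relevance
    # 2. prominence: prominent topics survive even with zero demand
    # 3. popularity: non-prominent topics must clear the demand bar
    return [c for c in kept if c.prominent or c.search_demand >= min_demand]
```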
Assign to each parent topic: Core or Outer label; Attribute category (design, features, price, etc.); Page type (definition, listicle, comparison, how-to, review, buying guide, news); Sales funnel stage (ToFu / MoFu / BoFu); Product/outcome it belongs to (for e-commerce). Assign prominence score (H/M/L) and popularity (search demand). Order topics by: source context relevance → prominence → search demand.
Phase 5 — Finalize & Build (Steps 15–19)
Final ordered list of topics with: Topic (parent query), Child topics (from clustering), Search demand, Competing document URL (optional), Source, Source Platform, Core/Outer, Attribute, Page Type, Funnel Stage, Prominence score, Node type (Root / Seed / Quality / Typical). Order similar topics together — semantic proximity matters for publishing momentum.
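As a concrete schema, the raw-map row can be modeled one field per column. An illustrative sketch, not part of the prompt itself:

```python
# One row of the raw topical map; enums keep the label vocabulary consistent.
from dataclasses import dataclass, field
from enum import Enum

class NodeType(Enum):
    ROOT = "Root"
    SEED = "Seed"
    QUALITY = "Quality"
    TYPICAL = "Typical"

@dataclass
class TopicRow:
    topic: str                              # parent query
    child_topics: list[str] = field(default_factory=list)
    search_demand: int = 0
    competing_url: str = ""                 # optional benchmark document
    source: str = ""                        # query/lexical semantics, authority hacking, GSC, LLM
    source_platform: str = ""               # Google / Ahrefs / GSC / Wikipedia / LLM
    section: str = "Outer"                  # Core or Outer
    attribute: str = ""
    page_type: str = ""
    funnel_stage: str = "ToFu"              # ToFu / MoFu / BoFu
    prominence: str = "M"                   # H / M / L
    node_type: NodeType = NodeType.TYPICAL
```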
For each topic in the raw map, produce: (1) Page Title tag (include primary entity + adjective/year/location/angle after colon — max 60 chars); (2) URL slug (hierarchical, lowercase, hyphens — entity taxonomy path, no stop words); (3) Featured Image URL (use a descriptive placeholder); (4) Image Alt Text (entity + attribute + context); (5) Meta Description (summarize title + child topics + key n-grams — reflects page structure). Use competitor SERP snippets and H2 tables of contents as input.
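Two of these components — the slug and the title — are mechanical enough to sketch. The stop-word list and truncation strategy below are illustrative; the 60-character cap and the hierarchical, hyphenated slug mirror the rules above.

```python
# Generate a hierarchical slug and a length-checked title tag.
import re

STOP_WORDS = {"a", "an", "the", "of", "to", "in", "for", "and", "or", "is"}

def slugify(path_segments: list[str]) -> str:
    parts = []
    for segment in path_segments:
        words = [w for w in re.findall(r"[a-z0-9]+", segment.lower()) if w not in STOP_WORDS]
        parts.append("-".join(words))
    return "/" + "/".join(parts) + "/"

def title_tag(entity: str, angle: str, max_len: int = 60) -> str:
    title = f"{entity}: {angle}"
    return title if len(title) <= max_len else title[: max_len - 1].rstrip() + "…"

print(slugify(["Running Shoes", "Best Nike Running Shoes"]))
# -> /running-shoes/best-nike-running-shoes/
print(title_tag("Best Nike Running Shoes", "2026 Buying Guide"))
```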
Label each page: 1st Go (launch batch — 20–50 pages, core quality nodes first), 2nd Go (second wave — expand outer section + core depth), 3rd Go (long-tail and low-priority topics). Publishing frequency must match or exceed your reverse-engineering target competitor's frequency for the 2–3 weeks before each broad core algorithm update.
Classify every page as: Root Node (the foundational "what is [entity]" page — receives most internal links); Seed Node (category-level representative pages per outcome/attribute); Quality Node (the most important pages closest to source context — e.g., reviews, buying guides, brand pages); Typical Node (standard informational pages, how-tos, outer section). Route internal links from Typical → Seed → Quality → Root.
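The routing rule is strictly upward: every page links one layer closer to the root document. A toy sketch with hypothetical URLs:

```python
# Route internal links Typical -> Seed -> Quality -> Root.
HIERARCHY = ["Typical", "Seed", "Quality", "Root"]

def link_targets(pages: dict[str, str]) -> dict[str, list[str]]:
    """pages: url -> node type. Returns url -> candidate targets one layer up."""
    plan = {}
    for url, node_type in pages.items():
        level = HIERARCHY.index(node_type)
        if level == len(HIERARCHY) - 1:
            plan[url] = []  # the root node only receives links
            continue
        upper = HIERARCHY[level + 1]
        plan[url] = [u for u, t in pages.items() if t == upper]
    return plan

pages = {
    "/how-to-clean-running-shoes/": "Typical",
    "/running-shoes/": "Seed",
    "/best-running-shoes-2026/": "Quality",
    "/what-are-running-shoes/": "Root",
}
print(link_targets(pages))
```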
Choose your strategy: Deep + Momentum (recommended for new sites — dominate a narrow attribute set, publish fast); Vast + Momentum (cover all outer section broadly, high budget); Deep + Vast (ideal but requires resources). Cannot go vast without momentum. Going deep = covering every derived entity and sub-entity of your core section. Going vast = covering all prominent outer attributes broadly. Always track topical coverage ratio vs. your benchmark competitor.
Every tab maps to a specific phase of the 19-step process
Multi-source SERP + competitive intelligence. 19 columns, including: source context, central entity, ontology, frame semantics, Wikipedia anchors, buyer personas, competitor rankings, Google suggestions, PAA questions, SERP bold terms, image results, entity type, LLM associations, query source, and search demand.
Extracted query templates and derived outcomes (entities) from the research layer. Columns: Query Template pattern, Outcome entity it applies to, Example expanded query, Sales funnel stage, Page type it generates, Estimated pages from this template.
Full lexical semantic map for the central entity and top 5 derived entities. Columns: Entity, Relation type (hypernym / hyponym / meronym / synonym / root attribute), Related term, Wikipedia / Wikidata source, Suggested page type, Core or Outer assignment.
The publishing-ready content plan. Every page gets: topic, child topics, search demand, competing URL, source, core/outer, attribute category, page type, funnel stage, node type (Root/Seed/Quality/Typical), prominence (H/M/L), publishing momentum (1st/2nd/3rd Go).
For every page in Tab 4: Page Title (max 60 chars), URL slug (hierarchical, hyphenated), Image URL placeholder, Image alt text, Meta description, plus N-grams & entities (15+ entities and 15+ attributes per page for semantic completeness).
10 full article briefs for priority pages. Exact H1, H2s, H3s with methodology notes per section, word count targets, internal anchor text, source context alignment notes, and micro semantic writing instructions based on competitor SERP analysis.
Complete Koray Tugberk GUBUR semantic writing rule set with niche-specific examples. EVA Model, heading vector logic, entity declarations, contextual completeness, sentence-level semantic precision. The author rulebook for ranking in an entity-first search era.
Full Pakistan geographic entity coverage. 7 first-level administrative units (the 4 provinces plus Azad Kashmir, Gilgit-Baltistan, and Islamabad Capital Territory), 35 divisions, 55+ cities, 18+ towns, neighborhood-level data for Lahore/Karachi/Islamabad, 35+ tehsils, 20 region types, 25+ urban/rural area types. Geo-modified query templates for each location tier.
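If you want to scaffold the workbook yourself rather than have the AI emit it, a minimal openpyxl sketch follows. The tab names are inferred from the descriptions above and the header row mirrors Tab 4's columns — both are assumptions to adapt to your own conventions.

```python
# Scaffold the 8-tab workbook (sheet names kept under Excel's 31-char limit).
from openpyxl import Workbook

TABS = [
    "1. Research Intelligence",
    "2. Query Templates & Outcomes",
    "3. Lexical Semantics",
    "4. Processed Topical Map",
    "5. Macro Semantics",
    "6. Content Briefs",
    "7. Semantic Writing Rules",
    "8. Locations Topical Map",
]

wb = Workbook()
wb.active.title = TABS[0]
for name in TABS[1:]:
    wb.create_sheet(title=name)

# Header row for the processed topical map, per the Tab 4 spec above.
headers = ["Topic", "Child Topics", "Search Demand", "Competing URL", "Source",
           "Core/Outer", "Attribute", "Page Type", "Funnel Stage",
           "Node Type", "Prominence", "Go (1st/2nd/3rd)"]
wb["4. Processed Topical Map"].append(headers)

wb.save("topical_map.xlsx")
```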
Replace each bracketed field with your real data — the more specific, the more precise the output
Your domain and full brand name as it appears on-site. Used for entity declarations, schema signals, and naming the output file.
How the business earns money — the monetization method. E.g., "sell mobile phone cases online" or "provide visa consultancy services." Anchors every topic decision.
The single entity your entire topical map is centered around — verified in Wikipedia or Google Knowledge Panel. E.g., "Mobile Phone" (not "Mobile Phone Reviews").
The dominant verb(s) describing user behavior — find & compare, buy, learn, repair. Determines Core vs. Outer section split and content type distribution.
The most important derived entities — product names, brands, types (e.g., iPhone, Samsung Galaxy, budget phones, flagship phones). Each gets its own query template research.
Country for keyword research (sets search volumes) and primary service city (drives neighborhood-level Tab 8 geographic coverage).
Name + URL for each. The workbook's authority hacking step extracts their topical coverage for n-gram analysis and gap identification.
One competitor: lower DR, newer domain, high traffic on relevant topics. This is the site you want to replace at the next broad core algorithm update.
Who visits your site — tech enthusiast, budget buyer, brand loyalist, casual buyer. Used to prioritize query templates and content angles closest to your source context.
Major additions — all based on the Mubashir Hassan + Koray Tugberk GUBUR methodology
The entire research process is now embedded into the prompt — from source context through to macro semantic components. Previously the prompt jumped straight to Excel tab specs; now the AI is instructed to reason through each step before populating the workbook, dramatically improving output quality and alignment with the actual methodology.
The old 3-column Process Topical Map tab is replaced with a dedicated Query Templates & Outcomes tab. This directly maps the webinar's core teaching: identifying scalable query templates (e.g., "[Brand] [Entity] review", "[Entity] under $[Price]") and the derived outcome entities they apply to, making the template→outcome→page scaling logic explicit.
New dedicated tab for hypernym / hyponym / meronym / synonym / root attribute mapping — the webinar emphasized these lexical semantic relations as essential for understanding what to cover in outer section topics and root documents. Sources: Wikipedia, Wikidata, LLM prompts.
Old Tab 3 had 9 columns. New Tab 4 adds: Competing URL, Source, Source Platform, Funnel Stage, Node Type (Root/Seed/Quality/Typical), and Prominence score (H/M/L). Node type classification was a key webinar concept — distinguishing root, seed, quality, and typical nodes for internal link architecture.
Previously combined as "N-Grams & Entities." Now Tab 5 explicitly covers all macro semantic components per page (title, URL, image URL, image alt, meta description) plus entity/attribute coverage — aligning with the webinar's explanation of what a "processed topical map" actually contains vs. what goes into a content brief.
The prompt now explicitly instructs the AI to apply attribute filtration in the correct order: relevance first (does this belong given the source context?), then prominence (does the entity need this attribute to exist?), then popularity (is there search demand?). This was one of the most important distinctions from the webinar.
Prompt now includes a dedicated section for the strategic publishing decision: Deep + Momentum (recommended for new sites), Vast + Momentum (high-budget), or Deep + Vast. Instructs the AI to assign 1st/2nd/3rd Go labels with the rationale tied to the reverse-engineering target's publishing frequency.
Quick Steps: fill in your business details below, copy the prompt, and paste it into Claude (recommended) or ChatGPT.
Simple answers to the most common questions about topical maps, semantic SEO, and how this prompt works — explained with real examples.
A topical map is your master content plan. Imagine you sell shoes online. Instead of writing random articles, a topical map tells you exactly which 100 articles to write — shoe brands, types, sizes, care guides, buying guides — all connected to each other. Google sees all these articles together and says: "This website really knows shoes." That's what gets you ranked higher than your competitors.
Topical authority means Google trusts your website as an expert on a specific topic. Think of it like being the best doctor in your city. People trust you more than a random clinic because you have years of experience in one area. A website with 80 articles about running shoes — covering brands, training tips, care guides, size charts — has topical authority. A site with only 5 articles does not, even if those 5 are excellent.
Source context simply means: how does your business make money? For example, if you run an online laptop store, your source context is: "Sell laptops to customers online — earning from each product sale." This is the compass for your whole topical map. Every article you write should connect back to this goal. If you sell laptops, writing about refrigerators is off-topic and wastes your content budget.
Your central entity is the one main topic your entire website revolves around. It must exist on Wikipedia or Google's Knowledge Panel. For a shoe store it's "Shoes." For a digital agency it's "Digital Marketing." For a law firm it's "Lawyer." Everything on your site connects back to this one entity. You can verify it by searching the word in Google — if a Knowledge Panel appears on the right side of results, it's a verified entity and a good choice.
Core Section = articles that directly make you money. These are product reviews, buying guides, brand comparisons, and service pages. Example: "Best Nike Running Shoes 2026" or "Running Shoes Under Rs. 5,000." Outer Section = helpful background articles that build trust. Example: "How to Clean Running Shoes" or "History of Nike." Outer Section doesn't sell directly, but it brings in early visitors and proves to Google that you are a real expert — which helps your Core pages rank higher.
Old SEO was about repeating keywords: stuff "best shoes" into your article 50 times. Semantic SEO is smarter. It's about teaching Google the meaning behind your content. You write naturally about shoes, mention brands, materials, sizes, use cases, and related words. Google understands that all these words together mean you really know about shoes — and ranks you for hundreds of related searches, not just one keyword. It's the difference between a keyword-stuffed ad and an expert article.
It's four easy steps: 1. Fill in your business details in Section A of the prompt — your website, niche, and how your business earns money. 2. Click the Copy button to copy the whole prompt. 3. Paste it into Claude (recommended) or ChatGPT with code execution turned on. 4. The AI reads your business details, follows all 19 research steps automatically, and generates a complete 8-tab Excel file with your topical map, query templates, content briefs, and 200+ location entries. It takes about 10–20 minutes.
A query template is a reusable search pattern with a blank space. Example: "Best [Brand] shoes under [Price]." You can swap Nike, Adidas, or Puma for [Brand], and Rs. 3,000 or Rs. 7,000 for [Price]. One template creates 20 article ideas instantly. This is how big content sites publish hundreds of articles efficiently — they find 10–15 templates and apply them to all their products or services. The prompt identifies these templates for your specific niche automatically.
Yes, you need all three. ToFu (Top of Funnel) is for people just learning — example: "What is a running shoe?" They are not ready to buy yet. MoFu (Middle of Funnel) is for people comparing — example: "Nike vs Adidas running shoes." They are getting closer to buying. BoFu (Bottom of Funnel) is for people ready to purchase — example: "Buy Nike Air Zoom in Lahore." Without ToFu and MoFu content, visitors never discover your site. Without BoFu content, they never convert into customers.
Authority hacking means studying the websites already winning in your niche to learn from their topic coverage. It is not copying their content. It's like a student reading the top-scoring exam papers to understand which topics the examiner values most. You look at which articles they have, how many pages they publish per month, and what topics they haven't covered yet. Then you build a plan to cover the same topics better — plus fill in the gaps they missed.
Think of your website like a city. Root Node is the city center — your most important foundation page, like "What is Digital Marketing." Seed Nodes are neighborhoods — category pages like "SEO Services" or "Social Media Marketing." Quality Nodes are your best shops — high-value money pages like "Best SEO Packages in Pakistan 2026." Typical Nodes are regular streets — standard informational articles. Internal links always point from regular streets toward the city center.
Lexical relations are the word connections around your main topic. For the word "Laptop": a hypernym is "Computer" (the bigger category), a hyponym is "Gaming Laptop" (a specific type), a meronym is "Keyboard" (a part of the laptop), and a synonym is "Notebook." Mapping these helps you find dozens of article topics you would have missed. Google uses these word relationships to check whether you truly understand your topic — or just know a few keywords.
Publishing momentum means publishing your articles in the right order, not randomly. Think of it like building a house — you pour the foundation first, then build walls, then add furniture. 1st Go: publish your 20–50 most important Core pages. 2nd Go: expand Outer Section and add more depth. 3rd Go: fill in long-tail and low-priority topics. Publishing in consistent waves — faster than your main competitor — signals to Google that your site is growing with a real plan, not randomly.
Yes — completely. This prompt is designed to work for any business: ecommerce stores, law firms, digital agencies, restaurants, SaaS companies, coaching businesses, hospitals, and more. You simply change the business details in Section A. The AI adapts the entire 19-step process to your specific niche, industry, and how your business earns money. The examples in the prompt show an SEO agency, but you can replace them with your own business type.
With this AI prompt, Claude or ChatGPT generates the complete 8-tab Excel topical map in 10–20 minutes, depending on how detailed your business information is. Manually, a professional SEO strategist would take 2–3 weeks to research and build a map of this quality using Ahrefs, Google Search Console, Wikipedia, and manual clustering. The prompt compresses the entire 19-step process into one automated session.
Attribute filtration removes topics that don't belong in your topical map. You filter in this exact order: (1) Relevance — does this topic connect to how your business makes money? Remove if no. (2) Prominence — is this topic important enough that your audience expects to find it? Example: "shoe size" is prominent for a shoe store; "shoe factory history" probably isn't. (3) Popularity — do people actually search for it? This order ensures you never delete a relevant topic just because its search volume looks low.
Deep means covering one narrow area brilliantly — like writing 50 detailed articles about running shoes before touching hiking boots. Vast means covering all shoe types with just a few articles each. For new websites, going Deep wins. Why? Because Google ranks sites that demonstrate real expertise, not sites that have one article about everything. Once you dominate running shoes and start ranking, then you expand to hiking boots, casual shoes, and formal shoes. Going vast from day one usually means ranking for nothing.
Macro semantic components are the key HTML signals that tell Google what your page is about before it reads the full article. They include: Page Title (what shows in Google results — max 60 characters), URL Slug (the web address — short, keyword-rich, no stop words), Image Alt Text (description of your featured image for Google Images), and Meta Description (the 2-sentence summary shown under your title in search results). Getting these right is the first step to ranking — they're among the first signals that search engines and AI systems like Google's Gemini and ChatGPT Search read.
SERP clustering groups keywords by what Google actually shows for each search — not just by keyword similarity. If "best running shoes" and "top running shoes 2026" both show the exact same 4 websites in Google search results, they belong on one page, not two. Regular keyword tools group by word similarity, which can lead you to write 10 articles competing with each other. SERP-based clustering prevents this by using real search behavior to decide which keywords share a page.
Claude is recommended for the best results because it follows complex multi-step instructions most accurately and produces the highest quality structured Excel output. ChatGPT with code execution enabled works well too — just make sure the code execution toggle is on before pasting the prompt. Gemini can generate files but sometimes struggles with very long prompts. If you are using Claude, use Claude Sonnet or Claude Opus for best output quality.
Your reverse-engineering target is one competitor website you aim to outrank. Choose a site that: (1) has a lower Domain Rating (DR) than the big authority sites in your niche, (2) was created 1–3 years ago (not 10+ years), (3) is already getting decent traffic for your target keywords. Avoid picking massive sites like Wikipedia or Forbes — you can't beat them yet. Pick a realistic, beatable target. Then study their exact topic coverage and publishing speed, and create a plan to beat them at the next Google core update.
Yes — it is built specifically with Pakistan in mind. Tab 8 of the Excel output includes a full Locations Topical Map with 200+ geographic entries: all provinces, 35+ divisions, 55+ cities, towns, neighborhoods in Lahore/Karachi/Islamabad, and tehsils. It also generates geo-modified query templates for each location tier — like "SEO services in Lahore," "digital marketing agency Karachi," and "best laptop store Islamabad." This helps businesses rank across all Pakistan cities simultaneously, not just their home city.
Copy the Master Prompt, fill in your business details, and let Claude execute the full 19-step methodology. Your complete semantic content architecture — from source context to publishing momentum.