
How AI decides which companies to recommend

When someone asks ChatGPT, "Who's the best [service] in [city]?" the AI doesn't flip a coin. It doesn't check a paid directory. It runs an evaluation process that weighs specific types of digital evidence and recommends the companies with the strongest signals. Understanding this evaluation process is the first step to influencing it. Here's exactly how AI makes the decision.


The six evidence categories AI tools weigh when deciding which companies deserve a recommendation

AI language models evaluate businesses across six evidence categories before generating a recommendation: content depth and relevance, reputation signals from reviews, entity consistency across platforms, third-party authority and validation, structured data clarity, and topical expertise demonstration. The companies scoring highest across these categories get named.

It's important to understand that AI doesn't have a ranking algorithm the way Google does. There's no PageRank equivalent. There's no secret formula with specific weightings. Instead, AI language models process everything they know about businesses in a category and location, and generate a response based on where the evidence converges.

Think of it like asking a well-read friend for a restaurant recommendation. Your friend doesn't have a spreadsheet ranking restaurants by score. They recall what they've read, what they've heard, what seems most relevant to your specific question, and recommend the place where the most positive signals align. AI works similarly, except it's "read" millions of web pages instead of a few dozen reviews.

Here's each evidence category in detail:

Category 1: Content depth and relevance

AI evaluates whether your website provides enough detailed, specific information to justify a recommendation. A website with thin, generic content ("We provide quality services!") gives AI nothing to work with. A website with specific service descriptions, process explanations, pricing information, and answers to common customer questions gives AI rich evidence to synthesize.

What AI looks for specifically:

  • Pages dedicated to each service you offer, not one page listing everything
  • Content written in natural language that addresses customer concerns
  • Specific details: service process, expected outcomes, timelines, costs
  • Educational content demonstrating expertise (guides, how-to articles, FAQ)
  • Content freshness: recently updated pages signal an active business

Category 2: Reputation signals from reviews

AI reads review text across platforms. It processes both the sentiment (positive/negative) and the specifics mentioned (which services, which outcomes, which qualities). Businesses with many detailed, positive reviews create a pattern AI can match to future queries.

What AI looks for specifically:

  • Volume of reviews (more is better, but quality matters too)
  • Specificity of review text (mentions of services, staff, outcomes)
  • Consistency of positive sentiment across platforms
  • Recency of reviews (recent reviews signal current quality)
  • Cross-platform review presence (Google, Yelp, industry platforms)

Category 3: Entity consistency across platforms

AI cross-references your business information across multiple sources. When your name, address, phone number, hours, and service descriptions are identical everywhere, AI trusts that the information is accurate. When there are mismatches, AI loses confidence.

What AI looks for specifically:

  • Exact match of business name across all platforms
  • Consistent address, phone number, and hours
  • Matching service descriptions and business categories
  • No contradictory information (old addresses, discontinued services)
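The cross-referencing idea above can be sketched in code. This is an illustrative consistency check you could run on your own listings, not a depiction of how any AI model works internally; the platform names and business details below are hypothetical.

```python
import re

def normalize(value: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    trivial formatting differences don't count as mismatches."""
    value = re.sub(r"[^a-z0-9 ]", "", value.lower())
    return re.sub(r"\s+", " ", value).strip()

def consistency_report(listings: dict) -> dict:
    """Compare name/address/phone across platform listings and
    report, per field, whether every platform agrees."""
    report = {}
    for field in ("name", "address", "phone"):
        values = {normalize(data[field]) for data in listings.values()}
        report[field] = len(values) == 1  # True only if all platforms match
    return report

# Hypothetical listings for the same business on three platforms
listings = {
    "google":  {"name": "Acme Dental", "address": "12 Main St, Frisco, TX", "phone": "(972) 555-0100"},
    "yelp":    {"name": "Acme Dental", "address": "12 Main St., Frisco TX", "phone": "(972) 555-0100"},
    "website": {"name": "Acme Dental", "address": "12 Main St, Frisco, TX", "phone": "(972) 555-0199"},
}

print(consistency_report(listings))
```

Here the Yelp address differs only in punctuation (normalized away), but the website phone number genuinely disagrees, so the report flags the phone field as inconsistent.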

Category 4: Third-party authority and validation

AI distinguishes between what you say about yourself and what others say about you. Self-promotion on your own website is expected but not especially convincing. Mentions on independent third-party sources (media, professional associations, community organizations, industry publications) are treated as stronger evidence of quality and relevance.

What AI looks for specifically:

  • Mentions on local or national media outlets
  • Listings on professional association directories
  • Community organization acknowledgements (chamber of commerce, awards)
  • Industry publication features or citations
  • References on trusted review and comparison platforms

Category 5: Structured data clarity

Schema markup helps AI extract your business information cleanly. Without structured data, AI must interpret unstructured text and guess what your business is. With schema, AI receives labelled, organized data it can process with high confidence.

What AI looks for specifically:

  • LocalBusiness schema (or a more specific subtype) with complete attributes
  • Service schema defining each service offered
  • Review schema for aggregated ratings
  • FAQ schema for question-and-answer content
  • Proper implementation (no errors or incomplete markup)
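As a concrete sketch, here is what minimal LocalBusiness-style JSON-LD might look like, built with Python's json module. The business details are hypothetical; in practice the output would be embedded in a script tag of type application/ld+json in your page's head.

```python
import json

# Hypothetical business details; in practice these come from your own records.
schema = {
    "@context": "https://schema.org",
    "@type": "Dentist",  # a specific LocalBusiness subtype beats the generic one
    "name": "Acme Dental",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "12 Main St",
        "addressLocality": "Frisco",
        "addressRegion": "TX",
        "postalCode": "75034",
    },
    "telephone": "(972) 555-0100",
    "openingHours": "Mo-Fr 08:00-17:00",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "267",
    },
}

# Serialize for embedding in the page's <head>.
print(json.dumps(schema, indent=2))
```

The point of labelling each attribute explicitly is exactly what the paragraph above describes: a parser (or a model's ingestion pipeline) no longer has to guess which string is the phone number and which is the street address.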

Category 6: Topical expertise demonstration

AI evaluates whether your business demonstrates genuine expertise in your field. A dental practice that publishes detailed content about dental procedures, oral health, and treatment options demonstrates topical authority that a dental practice with a three-page brochure website doesn't.

What AI looks for specifically:

  • Depth of content covering your professional domain
  • Credentials and qualifications of your team
  • Evidence of specialization or focused expertise
  • Educational content that helps potential customers make decisions
  • Consistency between claimed expertise and review evidence

Walking through AI's complete evaluation process for a specific business recommendation query

Let me demonstrate how all six categories work together for a real query:

Query: "Can you recommend a good family dentist in Frisco, Texas for a family with young kids?"

ChatGPT processes this query and identifies the parameters: family dentist, Frisco Texas, young kids.

It then evaluates every dental practice in Frisco it has evidence for:

Practice A (gets recommended):

  • Content: 18-page website with a dedicated "Pediatric Dentistry" page describing their approach to treating children, a "Your Child's First Visit" page addressing parental anxiety, and a "Family Dentistry" page explaining their whole-family approach
  • Reviews: 267 Google reviews, 4.8 average. Dozens mention "great with my kids," "my 4-year-old actually likes going," and "the whole family goes here"
  • Consistency: Perfect match across Google, Healthgrades, Yelp, the Texas Dental Association, and their website
  • Third-party: Listed in the Frisco Chamber of Commerce directory, mentioned in a Frisco Family Magazine "best dentists for kids" article
  • Schema: Dentist schema with services including "pediatric dentistry," Review schema, FAQ schema
  • Expertise: Published content about children's dental development milestones, cavity prevention for kids, and when to start orthodontic evaluation

Practice B (doesn't get recommended):

  • Content: 4-page website with a "Services" page listing "General Dentistry, Cosmetic Dentistry, Orthodontics, Pediatric Dentistry" as bullet points with no further detail
  • Reviews: 43 Google reviews, 4.6 average. A few mention kids but without specifics
  • Consistency: Google shows an old address. Yelp has the wrong phone number
  • Third-party: None found
  • Schema: None
  • Expertise: No published content about any dental topic

The gap isn't subtle. Practice A gave AI rich, specific, consistent evidence across all six categories. Practice B gave AI almost nothing. AI recommends Practice A with confidence. It skips Practice B entirely.

Both practices might be equally good at treating children. AI can't evaluate that. It can only evaluate the evidence, and the evidence gap is massive.
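One way to make that evidence gap concrete is a toy scoring sketch. The category scores and weights below are invented for illustration only (AI models expose no explicit weights or formula, as noted earlier); the heavier weights on reviews and content loosely mirror the prioritization discussed later in this article.

```python
# Toy evidence scores (0-10 per category); all numbers are invented for illustration.
CATEGORIES = ["content", "reviews", "consistency", "third_party", "schema", "expertise"]

# Hypothetical weights: reviews and content matter most, schema and expertise least.
WEIGHTS = {"content": 3, "reviews": 3, "consistency": 2,
           "third_party": 2, "schema": 1, "expertise": 1}

practice_a = {"content": 9, "reviews": 9, "consistency": 10,
              "third_party": 8, "schema": 9, "expertise": 8}
practice_b = {"content": 2, "reviews": 4, "consistency": 3,
              "third_party": 0, "schema": 0, "expertise": 0}

def total(scores: dict) -> int:
    """Weighted sum of evidence scores across all six categories."""
    return sum(scores[c] * WEIGHTS[c] for c in CATEGORIES)

print(total(practice_a), total(practice_b))  # 107 vs 24: the gap isn't subtle
```

Even this crude model shows why the outcome is lopsided: weak signals in the two highest-weight categories can't be rescued by strength anywhere else.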

Real example: A pest control company in Orlando investigated why ChatGPT recommended a competitor with roughly the same reputation and years in business. The evidence audit revealed the gap was concentrated in two categories: content depth (the competitor had 15 service-specific pages covering ants, termites, rodents, mosquitoes, bed bugs, and commercial pest control; they had a single "Services" page) and reviews (the competitor had 193 reviews, many mentioning specific pest types; they had 67 reviews that were mostly generic). The pest control company built service-specific pages and launched a review campaign targeting pest-specific feedback. Within about 90 days, ChatGPT began recommending them alongside the competitor. The owner mentioned that understanding the specific evidence categories made the fix feel manageable rather than overwhelming.

A practical prioritization of which evidence categories have the biggest impact on AI recommendations

Not all six categories carry equal weight. Based on patterns observed across many businesses and markets, here's a practical prioritization:

Highest impact: Reviews (Category 2) and Content Depth (Category 1)

These two categories together account for most of AI's recommendation confidence. A business with 200+ specific reviews and a comprehensive website dominates a business with 20 generic reviews and a thin website, even if the thin-website business has better schema and more directory listings. Reviews and content are the primary evidence.

High impact: Entity Consistency (Category 3) and Third-Party Validation (Category 4)

Consistency is a trust multiplier. It doesn't generate recommendations on its own, but inconsistency actively prevents recommendations even when other signals are strong. Third-party mentions are authority accelerators. A single media mention or professional association listing can push a business over the recommendation threshold.

Moderate impact: Structured Data (Category 5) and Topical Expertise (Category 6)

Schema markup makes AI's job easier but doesn't compensate for thin content or sparse reviews. Topical expertise reinforces content depth and builds long-term authority but takes longer to establish than the more actionable categories above.

The practical takeaway: start with reviews and content. Fix consistency. Earn third-party mentions. Implement schema. Build topical authority over time. This prioritization produces the fastest results.

