A Developer’s Guide to Building a Dining Recommender: From Prompt Engineering to Map Integration


webdevs
2026-02-04
10 min read

Recreate a Where2Eat-style dining micro app: LLM prompts, user preferences, Google/Waze integration, and deployment patterns for 2026.

Ship a lightweight dining recommender that actually helps people decide — fast

Decision fatigue kills plans and group chats. If your team or product needs a fast, reproducible micro app to recommend restaurants — with LLM-driven personalization and live navigation links — this guide walks you through rebuilding Rebecca Yu’s Where2Eat-style app in a production-ready way. You’ll get concrete prompts, a user-preferences model, Google Maps and Waze integration, and deployment patterns for 2026.

Why this matters in 2026

Since late 2024, LLMs have become reliable building blocks, and by 2025–2026 the ecosystem matured around function calling, on-device inference, and low-latency vector stores for retrieval-augmented generation (RAG). Micro apps — single-purpose tools built quickly — are now common for both personal use and as product prototypes. For a dining recommender, that means we can combine a compact preference model, structured LLM outputs, and maps APIs to deliver suggestions plus one-tap navigation.

Overview: architecture and components

We’ll implement a micro app with the following components:

  • Client (SPA): React/Next.js for UI and maps integration.
  • Serverless API: lightweight functions for LLM calls, caching, and maps proxies.
  • LLM: provider supporting function calling or structured JSON output.
  • Vector DB / Cache: optional for RAG and preference history (Pinecone, Weaviate, or Redis for small scale).
  • Maps: Google Maps Places API for search/details and Waze/Google Maps URL intents for navigation.

Why serverless?

Serverless lets you scale on demand, minimizes ops, and fits the micro app mentality. It also improves security because API keys for maps and LLMs live behind the function, not in the client.

Step 1 — Design the user preferences model

Start with a compact, typed preference model that's easy for an LLM to consume and update. Keep it small to limit context size and costs.

// types.ts
export type Cuisine = 'italian' | 'sushi' | 'mexican' | 'american' | 'vegan' | 'dessert'

export interface UserPrefs {
  userId: string
  tasteScores: Record<Cuisine, number> // normalized 0..1 per cuisine
  priceSensitivity: 0 | 1 | 2 // 0 = cheap, 1 = moderate, 2 = premium
  distanceKm: number // preferred max
  lastVisited?: string[] // place_ids
}

Key ideas:

  • Use normalized scores for cuisines so the LLM can reason quantitatively.
  • Store only what's needed: avoid PII and raw location logs where possible.
  • Keep lastVisited short (e.g., last 10) for freshness and to prevent repeat suggestions.
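For concreteness, here is what a populated record might look like under the types above. The values and place_id strings are purely illustrative:

// example UserPrefs record (illustrative values; place_ids are placeholders)
const examplePrefs: UserPrefs = {
  userId: 'u_123',
  tasteScores: { italian: 0.9, sushi: 0.6, mexican: 0.3, american: 0.2, vegan: 0.1, dessert: 0.4 },
  priceSensitivity: 1, // moderate
  distanceKm: 3,
  lastVisited: ['ChIJexample1', 'ChIJexample2'] // most recent first, capped at 10
}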

Step 2 — Prompt engineering for recommendations

In 2026, function-calling and schema-aware outputs are widely supported. Build prompts that:

  • Accept a compact preferences JSON
  • Return structured recommendations with confidence scores
  • Ask clarifying questions when uncertain

Example system + user prompt

const system = `You are a dining recommender. Output JSON exactly matching the schema. Prefer places within distanceKm and match cuisine scores. If user preferences are incomplete, ask 1 short clarifying question.`

const user = `Prefs: ${JSON.stringify(userPrefs)}. Context: user is at lat=${lat}, lon=${lon}. Time: ${timestamp}. Return up to 5 recommendations.`

Function schema (pseudo)

{
  "name": "recommend_restaurants",
  "description": "Return array of recommended restaurants",
  "parameters": {
    "type": "object",
    "properties": {
      "recommendations": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "place_id": {"type": "string"},
            "score": {"type": "number"},
            "reason": {"type": "string"}
          },
          "required": ["name","place_id","score"]
        }
      },
      "clarifying_question": {"type": ["string","null"]}
    },
    "required": ["recommendations"]
  }
}

When you call the LLM, set temperature low (0–0.2) for deterministic, reliable outputs. Use a short response length and enforce the schema via function calling so you get valid JSON you can parse safely.
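Even with function calling, validate the parsed arguments before trusting them. Here is a minimal sketch using zod that mirrors the schema above; safeParse lets you fall back gracefully (retry or show an error) instead of crashing on malformed output:

// validation.ts — guard against malformed LLM output (sketch using zod)
import { z } from 'zod'

const RecommendationSchema = z.object({
  name: z.string(),
  place_id: z.string(),
  score: z.number(),
  reason: z.string().optional(),
})

export const RecommendOutputSchema = z.object({
  recommendations: z.array(RecommendationSchema).max(5),
  clarifying_question: z.string().nullable().optional(),
})

export function parseRecommendations(raw: string) {
  try {
    const result = RecommendOutputSchema.safeParse(JSON.parse(raw))
    return result.success ? result.data : null
  } catch {
    return null // not even valid JSON: treat as a failed generation
  }
}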

Step 3 — Enrich suggestions with Google Maps Places

LLMs can suggest restaurants by name, but for location accuracy, call the Places API to resolve and annotate results.

Flow

  1. LLM returns 3–5 candidate names or intents
  2. Your serverless function queries Google Places Text Search / Nearby Search with the candidate string and user's lat/lon
  3. Merge Places results into the recommendation object using place_id

// serverless function (sketch): resolve a candidate name via Places Text Search
async function resolvePlace(candidate: string, lat: number, lon: number, googleKey: string) {
  const params = new URLSearchParams({
    query: candidate,
    location: `${lat},${lon}`, // bias results toward the user's position
    radius: '5000',            // metres; tune to the user's distanceKm
    key: googleKey,
  })
  const r = await fetch(`https://maps.googleapis.com/maps/api/place/textsearch/json?${params}`)
  return r.json()
}

Use Places place_id to produce navigation links and fetch details like rating, opening hours, and price level.
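A sketch of the merge step: take the top Text Search result for each LLM candidate and copy the fields the UI needs onto the recommendation. The field names follow the Places Text Search response (results[].place_id, rating, price_level, geometry.location); the Rec shape itself is illustrative:

// merge step (sketch): enrich an LLM candidate with Places metadata
interface Rec { name: string; score: number; reason?: string }

async function enrich(rec: Rec, lat: number, lon: number, googleKey: string) {
  const places = await resolvePlace(rec.name, lat, lon, googleKey)
  const top = places.results?.[0]
  if (!top) return null // drop candidates Places can't resolve
  return {
    ...rec,
    place_id: top.place_id,
    address: top.formatted_address,
    rating: top.rating,
    priceLevel: top.price_level,
    location: top.geometry?.location, // { lat, lng } used for navigation links
  }
}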

Step 4 — Waze and Google Maps integration for instant navigation

Users expect a one-tap path from recommendation to navigation. Provide both app deep links and web fallbacks.

Waze

App deep link:

waze://?ll={lat},{lon}&navigate=yes

Web fallback:

https://www.waze.com/ul?ll={lat},{lon}&navigate=yes

Google Maps

App / web link using coordinates:

https://www.google.com/maps/search/?api=1&query={lat},{lon}

Or, when you have a Places place_id, pass it for an exact match:

https://www.google.com/maps/search/?api=1&query={name}&query_place_id={place_id}

Provide both; detect platform to surface the best option. Example React snippet:

function openNavigation(lat: number, lon: number) {
  const waze = `waze://?ll=${lat},${lon}&navigate=yes`
  const webWaze = `https://www.waze.com/ul?ll=${lat},${lon}&navigate=yes`
  // Try the app scheme first; if Waze doesn't take over within ~500 ms, open the web fallback.
  const fallback = setTimeout(() => window.open(webWaze, '_blank'), 500)
  // Cancel the fallback if the page is hidden, i.e. the app actually opened.
  window.addEventListener('pagehide', () => clearTimeout(fallback), { once: true })
  window.location.href = waze
}
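For the "detect platform" part, a simple user-agent check is usually enough for a micro app. A heuristic sketch (not exhaustive) that picks the Waze deep link on mobile and Google Maps on desktop:

// pick a navigation URL per platform (heuristic sketch)
function navigationUrl(lat: number, lon: number): string {
  const isMobile = /Android|iPhone|iPad/i.test(navigator.userAgent)
  if (isMobile) {
    // deep link into Waze; openNavigation() above handles the web fallback
    return `waze://?ll=${lat},${lon}&navigate=yes`
  }
  // desktop: Google Maps in the browser
  return `https://www.google.com/maps/search/?api=1&query=${lat},${lon}`
}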

Step 5 — Cost, rate limiting, and API security (2026 advice)

APIs cost money and quotas change often. In 2026, expect more granular pricing for both LLMs and Maps. Protect yourself:

  • Restrict API keys to specific referrers/IPs in Google Cloud Console and your LLM provider dashboard.
  • Cache responses for identical queries; Places details change slowly. Use a CDN or Redis with a 30–60 minute TTL for place lookups (a minimal caching sketch follows this list).
  • Batch calls where possible: resolve multiple candidate names in parallel with a single function that respects provider rate limits.
  • Monitor spend with alerts and auto-scaling limits; many vendors now offer usage-based budgets.
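Here is the caching sketch mentioned above, assuming Redis via the ioredis client; the key shape and 45-minute TTL are just examples:

// cachedResolvePlace.ts — cache Places lookups (sketch, assumes ioredis)
import Redis from 'ioredis'

const redis = new Redis(process.env.REDIS_URL)
const TTL_SECONDS = 45 * 60 // within the 30–60 minute range suggested above

async function cachedResolvePlace(candidate: string, lat: number, lon: number, googleKey: string) {
  // round coordinates so nearby requests share a cache entry
  const key = `place:${candidate.toLowerCase()}:${lat.toFixed(2)},${lon.toFixed(2)}`
  const hit = await redis.get(key)
  if (hit) return JSON.parse(hit)

  const fresh = await resolvePlace(candidate, lat, lon, googleKey)
  await redis.set(key, JSON.stringify(fresh), 'EX', TTL_SECONDS)
  return fresh
}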

Step 6 — Implement RAG for local knowledge (optional)

If you want the app to factor in private data (restaurant lists, curated menus, event info), use a vector DB for retrieval. In 2026, managed vector services are cheaper and faster, but for a micro app you can often start with a small open-source vector store or even Redis. The micro-app template pack has patterns for small-scale RAG integrations.

Workflow:

  1. Embed your private docs or curated lists (e.g., user-saved favorites)
  2. On recommendation request, retrieve top-k context and pass it to the LLM as part of the prompt
  3. Merge the LLM’s structured output with Places results
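At micro-app scale, step 2 can be brute-force cosine similarity over a few hundred embedded favorites; no vector DB needed. A sketch, where getEmbedding is a placeholder for whatever embeddings endpoint your provider exposes:

// tiny RAG retrieval (sketch): brute-force cosine similarity over saved docs
interface EmbeddedDoc { text: string; vector: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

async function retrieveContext(query: string, docs: EmbeddedDoc[], k = 3): Promise<string[]> {
  const qVec = await getEmbedding(query) // placeholder: your provider's embeddings call
  return docs
    .map(d => ({ text: d.text, sim: cosine(qVec, d.vector) }))
    .sort((a, b) => b.sim - a.sim)
    .slice(0, k)
    .map(d => d.text) // pass these strings into the prompt as extra context
}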

Step 7 — Deployment and CI/CD patterns

Choose a deployment that matches your team’s priorities:

  • Vercel / Netlify — best for Next.js frontends and edge functions. Super fast to deploy and integrate with GitHub for micro apps. See the 7-day micro app launch playbook for an opinionated deploy flow.
  • Cloud Run / AWS Fargate — if you need containerized backends and more control.
  • AWS Lambda / Functions-as-a-Service — good if you want per-invocation billing for low-traffic micro apps.
  • Deno Deploy / Cloudflare Workers — low latency edge execution; perfect for quick LLM proxy functions and maps caching. Edge patterns and practical notes are covered in several serverless/edge guides.

Example GitHub Actions workflow: build, test, deploy to Vercel

name: CI
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
        with:
          version: 8
      - run: pnpm install --frozen-lockfile
      - run: pnpm test
      - name: Deploy to Vercel
        uses: amondnet/vercel-action@v20
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-org-id: ${{ secrets.VERCEL_ORG }}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT }}

Keep secrets in GitHub Secrets: LLM keys, GOOGLE_MAPS_API_KEY, and any vector DB creds.

Step 8 — Observability, testing, and user feedback

Start small but measure rigorously. Capture metrics:

  • LLM latency and error rates
  • Places API calls and cache hit ratio
  • Recommendation accept rate (user taps navigate)
  • Cost per recommendation

Add Sentry or an APM (Datadog, New Relic) to serverless functions. For UX, instrument the client with lightweight analytics to see which suggestions perform best and which clarifying questions users respond to.
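A lightweight way to capture the first two metrics without a full APM: time each upstream call and log a structured event you can aggregate later. The field names here are arbitrary:

// timed.ts — wrap upstream calls and emit structured latency/error events (sketch)
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now()
  try {
    const result = await fn()
    console.log(JSON.stringify({ label, ms: Date.now() - start, ok: true }))
    return result
  } catch (err) {
    console.log(JSON.stringify({ label, ms: Date.now() - start, ok: false }))
    throw err
  }
}

// usage: const places = await timed('places.textsearch', () => resolvePlace(name, lat, lon, key))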

Security and privacy checklist

  • Never expose Maps or LLM API keys in client-side code.
  • If you store user location, encrypt it at rest and provide a clear retention policy.
  • Support a privacy-first mode: allow recommendations without saving location by using ephemeral sessions.
  • Comply with regional rules (GDPR, CCPA) — provide data export/deletion hooks.

Example end-to-end flow (concise)

  1. User opens micro app; grants location permission.
  2. Client sends compact userPrefs + lat/lon to a serverless endpoint.
  3. Serverless function calls the LLM with function schema, receives candidate names.
  4. Server resolves candidates using Google Places, merges metadata, caches results, and returns structured recommendations.
  5. Client renders suggestions and shows navigation buttons (Waze / Google Maps).
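Put together, the serverless /recommend endpoint is mostly glue. A condensed sketch, assuming a fetch-style runtime (edge functions or Node 18+) and reusing the enrich helper from Step 3; callLLM stands in for the provider call shown in the next section:

// /api/recommend (sketch): orchestrate LLM -> Places -> response
export async function handler(req: Request): Promise<Response> {
  const { prefs, lat, lon } = await req.json()

  // 1. ask the LLM for candidate restaurants (structured output)
  const llmOut = await callLLM(prefs, lat, lon) // placeholder for the payload below
  if (llmOut.clarifying_question) {
    return Response.json({ question: llmOut.clarifying_question })
  }

  // 2. resolve candidates against Places in parallel (use the cached lookup)
  const enriched = await Promise.all(
    llmOut.recommendations.map(rec =>
      enrich(rec, lat, lon, process.env.GOOGLE_MAPS_API_KEY!)
    )
  )

  // 3. drop unresolved candidates and return
  return Response.json({ recommendations: enriched.filter(Boolean) })
}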

Prompt engineering examples (concrete)

Use a short system message and pass prefs as JSON. Here’s a minimal example you can copy:

// serverless handler (pseudo)
const system = `You are a concise restaurant recommender. Output JSON with recommendations array. If you need more info, set clarifying_question.`

const payload = {
  model: 'gpt-4o-mini',
  messages: [
    { role: 'system', content: system },
    { role: 'user', content: `Prefs:${JSON.stringify(userPrefs)}; Location:${lat},${lon}; Time:${time}` }
  ],
  temperature: 0.1,
  functions: [ functionSchema ]
}

const res = await fetch(LLM_API_URL, { method: 'POST', headers, body: JSON.stringify(payload) })

Note: replace model names with the current provider’s recommended models. In 2026, many providers support tiny, fast models ideal for micro apps — balance latency vs. reasoning capability.

Advanced strategies and future-proofing (2026+)

  • Local-first personalization: keep preference logic client-side for privacy, send only anonymized signals to the LLM.
  • Hybrid models: use small local LLMs for intent parsing and cloud LLMs for reasoning when needed.
  • Edge functions: move simple LLM prompts and caching to the edge (Cloudflare Workers / Vercel Edge) to shave off hundreds of ms.
  • Offline capability: store the last cache so the recommender works briefly without network connectivity — useful for on-the-go usage. See micro-app patterns in the micro-app template pack and the 7-day micro app launch playbook.
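The offline point above can be as simple as mirroring the last successful response into localStorage and reading it back when the network call fails. A sketch:

// offline fallback (sketch): keep the last recommendations in localStorage
const CACHE_KEY = 'where2eat:lastRecommendations'

async function getRecommendations(prefs: UserPrefs, lat: number, lon: number) {
  try {
    const res = await fetch('/api/recommend', {
      method: 'POST',
      body: JSON.stringify({ prefs, lat, lon }),
    })
    const data = await res.json()
    localStorage.setItem(CACHE_KEY, JSON.stringify(data))
    return data
  } catch {
    // offline or request failed: fall back to the last cached result, if any
    const cached = localStorage.getItem(CACHE_KEY)
    return cached ? JSON.parse(cached) : { recommendations: [] }
  }
}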

Common pitfalls and how to avoid them

  • Unstructured LLM output: use function-calling or schema validation libraries (zod) to guard against malformed responses.
  • API abuse: rate-limit per user and require authentication for high-cost endpoints.
  • Over-personalization: avoid training your scoring purely on engagement — include freshness and exploration to prevent echo chambers.
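One cheap way to add the exploration mentioned in the last point: occasionally promote a lower-ranked or never-visited place instead of always taking the top score. An epsilon-greedy rerank sketch (the 0.8 penalty and epsilon of 0.2 are arbitrary starting points):

// exploration rerank (sketch): epsilon-greedy pick to avoid echo chambers
function rerank<T extends { score: number; place_id: string }>(
  recs: T[],
  lastVisited: string[] = [],
  epsilon = 0.2
): T[] {
  // small freshness penalty for recently visited places
  const adjusted = recs.map(r => ({
    ...r,
    score: lastVisited.includes(r.place_id) ? r.score * 0.8 : r.score,
  }))
  adjusted.sort((a, b) => b.score - a.score)
  // with probability epsilon, swap a random lower-ranked option into the top slot
  if (Math.random() < epsilon && adjusted.length > 1) {
    const i = 1 + Math.floor(Math.random() * (adjusted.length - 1))
    ;[adjusted[0], adjusted[i]] = [adjusted[i], adjusted[0]]
  }
  return adjusted
}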
"Micro apps succeed when they solve one real user pain fast, with low friction and minimal maintenance." — Practical advice adapted from the 2025 micro-app trend.

Sample repo layout

  • /app - Next.js frontend
  • /api - serverless functions: /recommend, /resolve-place, /prefs
  • /lib - LLM client, maps client, caching utils
  • /infra - IaC (Terraform) for secrets, CDN, and vector DB

Wrap-up: build, iterate, and keep it light

Recreating Rebecca Yu’s Where2Eat is a practical way to learn prompt engineering, map integration, and serverless deployments. Start with a small preference model, enforce structured outputs from the LLM, enrich with Google Places for reliable metadata, and provide immediate navigation via Waze and Google Maps links. In 2026, leverage edge execution, function-calling, and low-cost vector stores to keep latency and spend down.

Actionable checklist

  1. Define UserPrefs and persist minimal state.
  2. Write a concise system prompt + function schema for recommendations.
  3. Implement a serverless / edge function to call LLM and Google Places.
  4. Cache Places responses and monitor API spend.
  5. Deploy to Vercel / Netlify and add a simple GitHub Actions workflow.

Want a starter repo and deployable example?

If you want, I can generate a ready-to-run Next.js starter with serverless endpoints, LLM prompt templates, and Google/Waze integration — including GitHub Actions and Vercel config. Tell me your preferred LLM provider and whether you want Pinecone or Redis for caching, and I’ll scaffold it. For pattern libraries and templates, check the micro-app template pack and the 7-day micro app playbook.

Next step: reply with your stack choices (Next.js / React, LLM provider, map APIs), and I’ll produce the scaffolded files and secrets checklist so you can deploy a working dining recommender in under a day.


Related Topics

#tutorial #maps #LLM

webdevs

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
