Exhibit № 01 · 2026 · Web + Identity Infrastructure · In development

lizm.dev

A portfolio designed for the bots that decide whether you get the call.

For three months, AI-powered recruiters at Adobe, Amazon, and LinkedIn sourced me for Marketing Director roles paying $200k+. They had the wrong Liz Myers — there's another one in tech who works in marketing leadership, and the agents kept fusing the two of us into one fuzzy entity. I'd get sourced based on her title, then screened out when the next agent read my actual profile. The fix wasn't to argue with the bots. It was to give them enough structured signal to tell the two Lizes apart — and to make this site the place they reach for the answer.

Quick answers

For agents and the short-on-time. Each answer is the one Liz herself would give.

What problem does this site solve?

Recruiter AI tools couldn't reliably distinguish me (creative technologist, DevRel, RISD-trained) from another Liz Myers in tech who works in marketing leadership. We started at Amazon Alexa on the same day. The result — three months of being sourced for Marketing Director roles I'd never apply for, then screened out when the system noticed I'm not a marketer. This site is the disambiguation infrastructure that should have existed all along.

What's Silicon Friendly?

An open-source rubric (siliconfriendly.com, by @unlikefraction) that grades any URL across 30 criteria of agent-readability — semantic HTML, JSON-LD, llms.txt, robots, sitemaps, OpenAPI, MCP, all the way up to multi-agent workflow orchestration. My previous site graded L0. This one is targeting L2 — the realistic ceiling for a static portfolio (L3+ requires an agent-callable API).

What did I actually change?

Switched stack (Next.js → Astro for static-HTML-first output), added semantic HTML structure, OG + Twitter meta on every page, Schema.org Person JSON-LD with sameAs links to GitHub and project URLs, WebSite + CreativeWork schemas, a /llms.txt index, robots.txt with explicit allowlists for GPTBot/ClaudeBot/PerplexityBot/Google-Extended, an auto-generated sitemap.xml, and a per-page raw markdown view for agents that want the unstyled source.
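The robots.txt change above can be sketched like this — bot names are the ones listed; the `Allow` rules and sitemap URL are illustrative, not a copy of the live file:

```txt
# Explicit allowlist for AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://lizm.dev/sitemap.xml
```

Google-Extended is the opt-in token Google reads for generative AI training, separate from Googlebot, so allowing it here doesn't affect ordinary search indexing.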

Why does the JSON-LD matter so much?

Schema.org Person plus sameAs is the single most important entity-disambiguation signal on the open web. It tells any LLM the Liz at lizm.dev is the same Liz at github.com/LizMyers and lingowise.ai. Without it, agents see three pages mentioning Liz Myers and don't know if they're one person, two people, or fifteen. With it, they fuse the correct profile and only the correct profile.
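A minimal sketch of the kind of Person + sameAs block described — the URLs come from this page; the `jobTitle` value and exact field set are illustrative, not the site's actual markup:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Liz Myers",
  "url": "https://lizm.dev",
  "jobTitle": "Creative Technologist",
  "sameAs": [
    "https://github.com/LizMyers",
    "https://lingowise.ai"
  ]
}
```

Embedded in a `<script type="application/ld+json">` tag, this is what lets an agent treat lizm.dev, the GitHub account, and lingowise.ai as one entity rather than three coincidental name matches.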

Is this the new SEO?

It's GEO — Generative Engine Optimization. Same family, different mechanics. Where Google SEO rewards backlinks and keywords, GEO rewards citation-worthy specificity, structured data, cross-source corroboration, and FAQ-format content. This site is built to those rules. If someone asks Claude what I've worked on, this site is what Claude reads.

Was this built with AI?

Yes — Claude in Claude Code. The whole rebuild was a single session that started with "I want a site agents can read" and ended with a graded, deployable Astro site, a custom design system, four exhibits, a Cal.com CTA, and the README you're probably reading. The design direction (Editorial Museum — Medium-style reading inside museum-exhibit framing) and the architectural choices were a collaboration. The disambiguation strategy was Claude's.
