lizm.dev
A portfolio designed for the bots that decide whether you get the call.
The bug
For three months, my inbox filled with the same email: a recruiter at Adobe, or Amazon, or LinkedIn, asking if I’d be interested in a Marketing Director role at $200k+. Polished outreach. Specific compensation. Wrong person.
There’s another Liz Myers in tech. She and I started at Amazon Alexa on the same day. She’s a marketing leader. I’m a creative technologist. AI sourcing tools — increasingly the front line of recruiting — kept collapsing the two of us into a single fuzzy entity. I’d get sourced for her job, then screened out by the next agent in the pipeline when it read my actual profile and saw apps and code instead of campaigns and growth.
This is the kind of mistake the open web makes constantly now. Entity collision: two people with the same name, similar enough trajectories, indistinguishable to a model that doesn’t have the right signals.
The realization
I could keep replying to the wrong recruiters, individually, forever. Or I could fix the upstream problem.
What the bots needed wasn’t more pages about me. The other Liz has plenty of those. They needed structured signal — the kind that lets a model say “the Liz Myers who shows up at lizm.dev is the same one at github.com/LizMyers, and she is not the marketing one.” Schema.org has had a Person type for over a decade. It has a sameAs field for exactly this. The web just doesn’t use them.
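Concretely, the mechanism is a few lines of JSON-LD. A minimal sketch of the disambiguation core (the live version carries more fields, listed below; the property names are standard Schema.org vocabulary):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Liz Myers",
  "url": "https://lizm.dev",
  "sameAs": ["https://github.com/LizMyers"]
}
```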
So I gave the bots a small, well-labelled corpus to reach for.
What’s actually on this site for them
Six things, none of them visible to a human visitor:
- `Person` JSON-LD in the head of every page, with `jobTitle: "Creative Technologist"`, an explicit description that does not contain the word “marketing”, and `sameAs` cross-links to GitHub, the project repos, and the live products.
- `WebSite` and `CreativeWork` schemas on the index and each project page, so an agent crawling once gets the relationships between me, this site, and each thing I’ve made.
- An `/llms.txt` at the root — markdown index of every page, with a disambiguation line as the first paragraph (sketched just after this list).
- A `robots.txt` that explicitly welcomes the AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, CCBot) instead of hand-wringing about whether to block them (also sketched below).
- An auto-generated `sitemap.xml` so crawlers can find everything in one fetch.
- OG and Twitter card metadata on every page, including the headshot from this site — which is now the same photo on github.com/LizMyers, on the Blue Plaque marker in the upper-left, and on the favicon. Cross-channel visual consistency is one of the strongest entity-resolution signals you can send.
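Both text files are simple enough to sketch. An `/llms.txt` in roughly this shape follows the llms.txt convention of an H1, a blockquote summary, then link lists; the disambiguation line comes first, and the page titles and paths here are illustrative rather than the live file:

```text
# lizm.dev

> Liz Myers, creative technologist: apps and code, not the marketing
> leader of the same name.

## Pages

- [Home](https://lizm.dev/): who I am and what I build
- [Projects](https://lizm.dev/projects/): one page per project, each with its own CreativeWork schema
```

And a `robots.txt` that welcomes rather than blocks can be a single group naming the crawlers (the protocol allows several user-agent lines per group); the sitemap line is shown here with the filename from the item above:

```text
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
User-agent: Applebot-Extended
User-agent: CCBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://lizm.dev/sitemap.xml
```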
The architectural choice that makes this cheap is Astro — static HTML from the first byte, no hydration tax, every page renderable by any crawler that can speak HTTP.
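In practice that means the `Person` block is just an object serialized into a script tag from the shared layout and rendered at build time. A minimal sketch, with an illustrative component name, description, and image path (`set:html` is Astro’s directive for injecting a pre-built string):

```astro
---
// src/components/PersonSchema.astro  (illustrative name)
// Rendered inside <head> by the shared layout, so every page
// carries the same entity-resolution signal in static HTML.
const person = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Liz Myers",
  jobTitle: "Creative Technologist",
  // Illustrative wording; the real description names the work and
  // never contains the word "marketing".
  description: "Creative technologist who designs and ships apps.",
  url: "https://lizm.dev",
  image: "https://lizm.dev/headshot.jpg", // illustrative path; same photo as GitHub and the favicon
  sameAs: [
    "https://github.com/LizMyers",
    // ...project repos and live product URLs
  ],
};
---
<script type="application/ld+json" set:html={JSON.stringify(person)} />
```

No client JavaScript has to run for a crawler to see it; the JSON-LD ships inside the same static HTML as everything else.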
The receipt
Silicon Friendly is an open-source rubric that grades any URL across 30 agent-readability criteria. The previous version of my site graded L0 — failing almost every row. This one is built to the rubric, top to bottom. Levels 3 through 5 are about agent-callable APIs (MCP servers, webhooks, idempotency keys) — not applicable to a static portfolio. The realistic target for a site like this is L2, with every Level 1 + 2 row passing.
The before screenshot lives in the repo. The after will land next to it once the site is graded.
What I learned
The web is being read mostly by machines now, and the machines are answering questions on behalf of humans who don’t have time to scroll. If your work doesn’t survive that translation — if a bot can’t confidently say “yes, this is the right person, here’s what she does, here’s how to reach her” — you don’t exist in the conversation.
You can fix that in an afternoon. It’s mostly typing.