Masthead

An independent AI research publication.

Solved By Code covers four beats on the AI frontier: frontier models, security, open source, and the long-term horizon. The editorial contract is simple: every claim sourced, every open-source release tested, every correction stamped in place.

What we cover

Four beats. One frontier.

Frontier

The models and labs shaping the field.

Daily signal on releases from Anthropic, OpenAI, DeepMind, Meta, Mistral, xAI, and the rising frontier of Chinese labs. Independent analysis of capability claims, benchmark scores, and the strategic moves behind each release. Not press-release stenography.

Security

Red-teaming, jailbreaks, privacy, alignment.

Vulnerability disclosures, prompt-injection research, model jailbreak techniques, training-data leaks, and alignment-failure incidents. Coverage informed by OWASP LLM Top 10, NIST AI RMF, and primary-source security research — not tech-press paraphrasing.

Open Source

Releases, tooling, ecosystem moves.

Open-weight model releases, agent frameworks, inference stacks, and tooling ecosystems. Every major open-source release is installed, benchmarked, and — where possible — contributed back to. See The Lab for the long-form technical reviews.

Horizon

Where AI is actually going.

Long-form essays on capability curves, economic impact, labour displacement, policy moves, and the decade-scale questions nobody else will answer on a deadline. Grounded in primary data — not vibes.

Editorial principles

Rules of the house.

Every claim is sourced.

Numbers, benchmarks, and quotes link inline to primary sources: papers, model cards, release tags, policy documents, security disclosures.

Open-source code is tested before it is reviewed.

If the code is public, we run it. Benchmarks that don't reproduce are called out. Reviews without reproduction are marked as such.

Corrections are stamped in place.

When a story is wrong, we fix it in-place with a dated changelog at the bottom of the page. No silent rewrites. No memory-holing.

No sponsored stories.

No paid placements, no vendor previews, no affiliate links. Independence is the only thing a publication actually has to sell.

No repackaged Twitter threads.

If a story starts from a social-media post, the post is sourced and verified, and the primary research it references is read first.

No pay-to-read archive.

Every story is free to read, indexable by search engines, and served under a stable URL. llms.txt and llms-full.txt are published for LLM retrieval.
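The llms.txt convention is a plain-Markdown index at the site root that tells LLM retrieval tools where the canonical content lives. A minimal sketch of what such a file could look like for this publication follows; the section names and URLs here are illustrative placeholders, not the actual published file:

```markdown
# Solved By Code

> Independent AI research publication covering frontier models,
> security, open source, and the long-term horizon.

## Beats
- [Frontier](https://example.com/frontier): model releases and lab analysis
- [Security](https://example.com/security): red-teaming and disclosures
- [Open Source](https://example.com/open-source): tested releases and tooling
- [Horizon](https://example.com/horizon): long-form essays on where AI is going

## Policies
- [Editorial principles](https://example.com/about): sourcing and corrections policy
```

llms-full.txt typically extends this index with the full text of each page, so retrieval systems can ingest the archive in one fetch.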

Publishing cadence

The rhythm of coverage.

Daily
Signal briefings — short annotated links to the day's most consequential research, releases, and disclosures.
Weekly
Analysis essays — 1,500–4,000 words on a single frontier move, security incident, or open-source release worth slowing down for.
Monthly
Lab reviews — deep technical reviews of open-source AI projects with merged PRs as receipts. See the Review Charter.
Quarterly
Horizon reports — long-form essays on where the field is headed over the next decade.

Edited by

Marco Nahmias.

AI researcher in training. The credibility claim this publication rests on is simple and unusual: 800,000+ lines of production code co-written with Claude, shipped live and serving real customers.

Most AI writing today comes from one of two places: researchers who rarely ship, or pundits who never wrote the code. This publication takes the third path — reporting on frontier systems from inside a large human–AI collaborative codebase, where the models are used every day at production scale.

The journey is explicit: I'm a builder learning to become an AI researcher in public. Every review, every long-form essay, and every architecture note is part of that apprenticeship. Corrections from actual researchers are not just welcome — they are the point.

Tips, pitches, corrections.

Research preprints, vulnerability disclosures, open-source releases worth covering, factual errors on any story — send them to editor@solvedbycode.ai. Signal welcome. PR pitches discarded.