# ZeroAI Ninja

> AI hacker tactics. Agents. Prompts. Jailbreaks. Real depth.

AI hacker deep-dives — agent architectures, prompt engineering, jailbreak research, and operational AI tactics from researchers who run experiments.

## Categories

- [AI Agents](https://zeroai.ninja/agents/) — Agent architectures, frameworks, autonomous systems.
- [Prompt Engineering](https://zeroai.ninja/prompts/) — Prompt patterns, jailbreaks, red-team techniques.
- [Model Reviews](https://zeroai.ninja/models/) — Claude, GPT, Gemini, Llama, Qwen — real benchmarks.
- [Fine-Tuning](https://zeroai.ninja/fine-tuning/) — LoRA, QLoRA, full fine-tunes — what actually works.
- [RAG Systems](https://zeroai.ninja/rag/) — Vector DBs, retrieval architectures, production RAG.
- [Inference Stack](https://zeroai.ninja/inference/) — vLLM, SGLang, llama.cpp — running models in production.
- [Local AI](https://zeroai.ninja/local-ai/) — Ollama, LM Studio, GGUF — AI on your hardware.
- [AI Security](https://zeroai.ninja/security/) — Prompt injection, red teaming, model security research.

## Cornerstone reads

- [The 2026 AI Hacker Stack](https://zeroai.ninja/blog/the-2026-ai-hacker-stack/) — After fine-tuning 40+ models this year, here's the stack that compounds.
- [Agent Frameworks Are Mostly Bloat](https://zeroai.ninja/blog/agent-frameworks-are-mostly-bloat/) — LangChain, CrewAI, AutoGen — why the 200-line custom loop wins.
- [How I Jailbreak Frontier Models in 2026](https://zeroai.ninja/blog/how-i-jailbreak-frontier-models/) — Red-team research on Claude, GPT-5, Gemini — the patterns that still work.
- [The RAG Systems That Actually Work](https://zeroai.ninja/blog/the-rag-systems-that-actually-work/) — After deploying RAG at 6 companies, here's what works and what's theater.
- [Local LLM Is Finally Real in 2026](https://zeroai.ninja/blog/local-llm-is-finally-real-in-2026/) — Gemma 3 27B on a 4090. The local-AI playbook that ships.
- [Why I Stopped Using LangChain](https://zeroai.ninja/blog/why-i-stopped-using-langchain/) — 200 lines of custom code beat the framework. Every time.
- [The Prompt Engineering Myth](https://zeroai.ninja/blog/the-prompt-engineering-myth/) — Most 'prompt engineering' is bullshit. The 5 patterns that actually move metrics.
- [Fine-Tuning Is Back (And Bigger Than Ever)](https://zeroai.ninja/blog/fine-tuning-is-back-and-bigger-than-ever/) — LoRA + base models + small specialists. The 2026 fine-tuning playbook.
- [vLLM vs SGLang vs llama.cpp](https://zeroai.ninja/blog/vllm-vs-sglang-vs-llamacpp/) — Three months of production benchmarks. The honest comparison.
- [The AI Research Manifesto](https://zeroai.ninja/blog/the-ai-research-manifesto/) — Why we run our own evals, train our own models, and don't trust the hype.

## About

- [About](https://zeroai.ninja/about/)
- [Disclosure](https://zeroai.ninja/disclosure/)
- [Privacy](https://zeroai.ninja/privacy/)

## Editorial standards

ZeroAI Ninja is an independent editorial publication. We do not publish AI-generated slop, accept gifted hardware, or recommend tools we haven't used in real production. See our disclosure for the full affiliate policy.

## Authors

- Kai Renner, Senior Researcher — AI researcher and red-teamer. Background in adversarial ML.
- Zofia Marek, Agents Editor — Builds production agent systems. Previously did ML at two unicorns.
- Devraj Iyer, Inference Editor — Inference engineer. Has shipped vLLM and SGLang at scale.