Notes on Recursive Language Models
RLMs are a novel way to keep the context of the main agent loop small, eliminating context rot for long-context tasks while reducing costs.
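As I understand the idea, the root model (or an orchestrating loop around it) never ingests the full long context; it recursively hands chunks to fresh sub-calls and only keeps their short outputs, so every individual context stays small. A minimal divide-and-conquer sketch of that pattern, assuming you supply your own `call_llm(prompt) -> str` client; the splitting strategy and chunk size here are arbitrary illustrations, not the paper's method:

```python
# Simplified divide-and-conquer sketch of the recursive pattern, not the
# RLM authors' implementation. `call_llm` is whatever chat-completion client
# you already have: a function that takes a prompt string and returns text.
from typing import Callable

def recursive_answer(
    question: str,
    document: str,
    call_llm: Callable[[str], str],
    chunk_chars: int = 20_000,
) -> str:
    """Answer a question over a long document without ever placing the full
    document into a single model call; only short sub-answers reach the
    final (root) call, which keeps each context small."""
    if len(document) <= chunk_chars:
        # Base case: this chunk fits comfortably into one context window.
        return call_llm(f"Document:\n{document}\n\nQuestion: {question}")

    # Recursive case: split the document, answer over each half with a fresh
    # sub-call, then synthesize from the short partial answers only.
    mid = len(document) // 2
    partials = [
        recursive_answer(question, document[:mid], call_llm, chunk_chars),
        recursive_answer(question, document[mid:], call_llm, chunk_chars),
    ]
    return call_llm(
        "Partial answers from sub-calls:\n"
        + "\n\n".join(partials)
        + f"\n\nQuestion: {question}\nCombine these into one final answer."
    )
```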
ForecastBench continuously evaluates the forecasting performance of LLMs against an automatically generated, regularly updated set of questions.
Practical guidelines on context engineering, such as keeping an append-only context, using response prefill to force or suppress tool use, setting up restorable compression strategies, and more.
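For the response-prefill trick in particular, the gist is that the request ends with a partial assistant turn that the model must continue. A hedged sketch with a generic chat-style payload; the field names and the commented-out client call are placeholders, so check your provider's API for the exact shape:

```python
# Response prefill: end the request with a *partial* assistant message so the
# model continues from it. Field names follow the common chat-completions
# shape; adapt to your provider.
messages = [
    {"role": "system", "content": "You are an assistant with a `search` tool."},
    {"role": "user", "content": "Summarize the design doc."},
    # Prefilling a prose opener steers the model away from emitting a tool
    # call; prefilling a tool-call prefix instead would push it toward one.
    {"role": "assistant", "content": "Here is a plain-text summary:"},
]
# response = client.chat(messages=messages)  # hypothetical client call
```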
A Model Context Protocol (MCP) server that lets LLMs run code safely in isolated Docker containers.
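Something in the same spirit can be sketched with the Docker SDK for Python; this only illustrates the sandboxing idea (the image name and resource limits are arbitrary choices here), not the server's actual implementation:

```python
# Illustrative sandboxed execution with the Docker SDK (pip install docker).
import docker

def run_untrusted_python(code: str) -> str:
    client = docker.from_env()
    # Throwaway container: no network, capped memory, removed after exit.
    output = client.containers.run(
        image="python:3.12-slim",         # assumed base image
        command=["python", "-c", code],
        network_disabled=True,
        mem_limit="256m",
        remove=True,
        stdout=True,
        stderr=True,
    )
    return output.decode("utf-8", errors="replace")

print(run_untrusted_python("print(2 + 2)"))  # prints 4
```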
What do the recent advancements in generative AI mean for APIs?
A conversation with ChatGPT about ChatGPT. Who are you?