AI and Machine Learning

Context Window

Definition

The context window is the maximum number of tokens that a large language model can process in a single interaction. This limit encompasses everything the model sees: the system prompt, conversation history, any retrieved documents or context, the user query, and the generated response. A token is roughly three-quarters of a word in English, so a 128,000-token context window can handle approximately 96,000 words of combined input and output.
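Token counts depend on the model's tokenizer, so word-based estimates are only approximations. Below is a minimal sketch of counting tokens with the tiktoken library, one common tokenizer package; the cl100k_base encoding is an illustrative choice and may not match the model you actually use.

```python
# Minimal token-counting sketch using tiktoken (one common tokenizer library).
# The cl100k_base encoding is an example; use the encoding that matches your model.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "The context window limits how much text a model can process at once."
tokens = encoding.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")
```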

Context windows vary significantly across models. Earlier models had windows of 4,000 to 8,000 tokens, while modern models offer 128,000 to over 1,000,000 tokens. However, larger context windows come with trade-offs: they increase latency, cost more per query, and research shows that model performance can degrade for information placed in the middle of very long contexts (the "lost in the middle" phenomenon).

Why It Matters for Product Managers

Context window size is one of the most practically important constraints PMs face when designing AI features. It determines whether a chatbot can remember an entire conversation, whether a RAG system can include enough document context for accurate answers, and whether a summarization feature can process an entire document in one pass. Understanding these limits helps PMs design features that work reliably rather than failing unpredictably when inputs exceed the window.

From a cost perspective, context window usage directly impacts per-query expenses since most LLM APIs charge per token. PMs must balance the desire for more context (which improves quality) against the cost of processing that context at scale. This trade-off shapes decisions about chunking strategies, conversation pruning, and which information to include versus exclude from each request.
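As a rough illustration of this cost trade-off, the sketch below estimates per-query spend from input and output token counts. The per-token prices are hypothetical placeholders, not any provider's actual rates.

```python
# Rough per-query cost estimate. The default prices are hypothetical placeholders;
# substitute your provider's real per-token rates.
def estimate_query_cost(input_tokens: int, output_tokens: int,
                        input_price_per_1k: float = 0.003,
                        output_price_per_1k: float = 0.015) -> float:
    """Return the estimated cost in dollars for a single request."""
    return (input_tokens / 1000) * input_price_per_1k + \
           (output_tokens / 1000) * output_price_per_1k

# Example: 6,000 tokens of prompt plus retrieved context, 800 tokens of output.
print(f"${estimate_query_cost(6_000, 800):.4f} per query")
print(f"${estimate_query_cost(6_000, 800) * 100_000:.2f} per 100,000 queries")
```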

How It Works in Practice

  • Map token budgets -- For each AI feature, calculate how many tokens are needed for the system prompt, typical user input, retrieved context, and expected output. Ensure the total stays well within the model's context window (see the budget sketch after this list).
  • Design context management -- Implement strategies for when content exceeds the window: conversation summarization for chatbots, document chunking and selection for RAG, or multi-pass processing for long documents.
  • Prioritize context placement -- Place the most important information at the beginning and end of the context, where models tend to pay the most attention. Avoid burying critical instructions or facts in the middle.
  • Test at boundary conditions -- Verify that the feature degrades gracefully when inputs approach the context limit. Users should receive clear feedback rather than truncated or degraded responses.
  • Monitor token usage -- Track actual token consumption in production to understand cost patterns, identify optimization opportunities, and detect anomalies that might indicate prompt injection or abuse.
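The sketch below illustrates the budget-mapping step against an assumed 128,000-token window. All component sizes are illustrative placeholders, not recommendations; note that the expected output is reserved up front because it counts against the same window.

```python
# Hypothetical token budget for a RAG-style feature against a 128K-token model.
# Every number here is an illustrative assumption, not a measured value.
CONTEXT_WINDOW = 128_000

budget = {
    "system_prompt": 1_200,
    "conversation_history": 6_000,
    "retrieved_context": 24_000,
    "user_input": 2_000,
    "reserved_output": 4_000,   # the generated response counts against the same window
}

total = sum(budget.values())
headroom = CONTEXT_WINDOW - total

for component, tokens in budget.items():
    print(f"{component:22} {tokens:>7,}")
print(f"{'total':22} {total:>7,}")
print(f"{'headroom':22} {headroom:>7,}  ({headroom / CONTEXT_WINDOW:.0%} of window)")

assert headroom > 0, "Budget exceeds the context window"
```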

Common Pitfalls

  • Designing features that assume unlimited context. Even with large context windows, filling them with irrelevant information degrades response quality and increases costs.
  • Not accounting for output tokens in the budget. The context window includes the generated response, so a 128K window with a 4K expected output really only has 124K tokens available for input.
  • Ignoring the "lost in the middle" effect, where models pay less attention to information placed in the middle of long contexts, leading to inaccurate responses even when the answer is technically in the window.
  • Failing to implement graceful degradation when context limits are exceeded, resulting in truncated inputs or error messages instead of a usable fallback experience (a minimal fallback sketch follows this list).
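One minimal approach to graceful degradation is to trim the oldest conversation turns until the request fits, reserving room for the output, and to fail with a clear message only when even that is impossible. In the sketch below, the count_tokens helper is an assumed wrapper around whatever tokenizer matches your model.

```python
# A sketch of graceful degradation: drop the oldest conversation turns until the
# request fits, instead of letting the provider silently truncate the input.
# `count_tokens` is an assumed helper that wraps the tokenizer for your model.
def fit_to_window(system_prompt, history, user_message, count_tokens,
                  context_window=128_000, reserved_output=4_000):
    # Tokens that cannot be trimmed: instructions, the new message, and space
    # reserved for the model's response (which counts against the same window).
    fixed = count_tokens(system_prompt) + count_tokens(user_message) + reserved_output
    if fixed > context_window:
        raise ValueError("Request cannot fit even with no history; ask the user to shorten it")

    kept = list(history)
    # Drop the oldest turns first until the remaining history fits.
    while kept and fixed + sum(count_tokens(turn) for turn in kept) > context_window:
        kept.pop(0)
    return kept
```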
Related Terms

Context window size is a fundamental constraint of every Large Language Model (LLM), directly shaping Prompt Engineering decisions about what to include in each request. Retrieval-Augmented Generation (RAG) architectures must fit retrieved documents within these token limits to ground model outputs effectively.
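A RAG pipeline typically packs the highest-ranked retrieved chunks into whatever portion of the window is reserved for context. The greedy packer below is a simplified sketch; ranked_chunks and count_tokens are assumed inputs from your retrieval pipeline, and the default budget is a placeholder.

```python
# A simple greedy packer for RAG: take the highest-ranked chunks until the token
# budget reserved for retrieved context is used up. `ranked_chunks` and
# `count_tokens` are assumed to come from your retrieval pipeline.
def pack_chunks(ranked_chunks, count_tokens, context_budget=24_000):
    """ranked_chunks: list of (score, text) pairs sorted best-first."""
    selected, used = [], 0
    for score, text in ranked_chunks:
        cost = count_tokens(text)
        if used + cost > context_budget:
            continue  # skip chunks that would overflow the reserved budget
        selected.append(text)
        used += cost
    return selected, used
```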

Frequently Asked Questions

What is a context window in product management?
The context window is the maximum amount of text an LLM can process at once, measured in tokens. For product managers, it defines the upper bound on how much information can be included in a single AI interaction, directly affecting feature design decisions like how much document context can be provided or how long a conversation can continue.

Why is the context window important for product teams?
Context window size is important because it constrains what AI features can do in a single request. A larger context window allows more document context for RAG, longer conversation history for chatbots, and more complex multi-step tasks. Product teams must design features that work within these limits while delivering a good user experience.
