Why Anthropic’s new 100k token Claude 2 highlights exponential growth in generative AI

Former OpenAI executives at San Francisco company Anthropic have launched Claude 2, a new model that can handle entire books

Anthropic, the AI startup founded by ex-OpenAI executives, recently unveiled its newest model, Claude 2, marking an important step in the development of generative AI.

This new large language model (LLM), Claude 2, makes a significant splash in the AI field with its unprecedented 100,000 token context window – a capability far exceeding its predecessor and most competing models.

Token limits for Large Language Models

To give context, OpenAI has an 8,000 token limit for its flagship product, GPT-4. The higher-end GPT-4 model does offer a 32,000 token limit, but this is only accessible to a select number of customers at present. Furthermore, GPT-3.5-turbo, the model used for the free version of ChatGPT, offers up to 16,000 tokens, but it falls short compared to GPT-4.

A token limit defines the maximum possible size of a model’s context window. Essentially, the limit is the volume of text the model can analyze before generating new content and is vital for determining a model’s efficacy.

The context window refers to the entire text the model considers before generating additional text or, in this case, formulating a response. Every time an interaction takes place, the entire conversation up to that point, including the user’s latest message, is sent to the LLM via the API. While this appears as one continuous interaction from the user’s perspective, in reality the LLM predicts the most appropriate response based only on the conversation history it receives with each request.

The LLM does not retain information about past requests, and each response is generated based on the conversation history it receives at that moment. This under-the-hood mechanism is a crucial factor that enables these models to generate contextually coherent and relevant responses.
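The mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not a real client: `call_llm` is a hypothetical stand-in for an actual API call, and the 4-characters-per-token estimate is a rough rule of thumb rather than a real tokenizer count.

```python
# Sketch of a stateless chat loop: the full history is re-sent on every
# call, trimmed to fit the model's context window.
# `call_llm` is a hypothetical stand-in, not a real API.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token of English text.
    return max(1, len(text) // 4)

def trim_history(history: list, max_tokens: int) -> list:
    """Drop the oldest messages until the conversation fits the window."""
    trimmed = list(history)
    while trimmed and sum(estimate_tokens(m["content"]) for m in trimmed) > max_tokens:
        trimmed.pop(0)
    return trimmed

def call_llm(messages: list) -> str:
    # Stand-in for a real model call; simply echoes the latest user message.
    return f"(reply to: {messages[-1]['content']})"

def chat_turn(history: list, user_message: str, max_tokens: int = 100_000) -> list:
    history = history + [{"role": "user", "content": user_message}]
    prompt = trim_history(history, max_tokens)  # only this slice reaches the model
    reply = call_llm(prompt)
    return history + [{"role": "assistant", "content": reply}]
```

A larger context window simply means `trim_history` has to drop older messages far less often, so more of the conversation survives to inform each response.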

Anthropic advancements in AI

As per TechCrunch’s report, Claude 2’s context window of 100,000 tokens is the largest of any commercially available model. Such a large context window offers several advantages. For one, models with smaller context windows often struggle to recall even recent conversations. On the other hand, a larger context window facilitates the generation and ingestion of much more text. For instance, Claude 2 can analyze about 75,000 words – the length of some entire novels – and generate a response of around 3,125 tokens. TechCrunch also reported that a 200,000 token model is feasible with Claude 2, “but Anthropic doesn’t plan to support this at launch.”
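The figures above follow from a common back-of-envelope conversion: English text averages roughly 0.75 words per token, or about 1.33 tokens per word. A quick sketch of the arithmetic (the ratio is a rule of thumb, not an exact tokenizer count):

```python
# Rough tokens-to-words conversion, as used in the article's estimates.
TOKENS_PER_WORD = 4 / 3  # rule of thumb for English text, not exact

def words_to_tokens(words: int) -> int:
    return round(words * TOKENS_PER_WORD)

def tokens_to_words(tokens: int) -> int:
    return round(tokens / TOKENS_PER_WORD)

print(tokens_to_words(100_000))  # → 75000, matching the ~75,000-word figure
```

By the same rule of thumb, GPT-4’s 8,000-token limit corresponds to only about 6,000 words.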

As India Times noted, the AI landscape is transforming into an open battlefield, with major tech companies striving to develop their contributions to AI chatbots. Claude 2, with its high token limit and improved features, indeed represents a formidable force in this arena.

However, it’s vital to underscore that AI development isn’t solely about technological advancement; it’s equally about ensuring responsible and ethical growth. Anthropic has taken a cautious approach in unveiling Claude 2, with the company’s head of go-to-market, Sandy Banerjee, emphasizing the importance of deploying their systems to the market to understand their actual usage and how they can be improved.

Crucial milestone for generative AI

Ultimately, the release of Claude 2 and its 100,000 token limit to the public is a crucial milestone in the progress of generative AI. As the context window of LLMs expands, and the processing power of the chips running them increases, the seemingly limitless possibilities of generative AI come into sharper focus.

Many emerging prompting methodologies, such as the tree-of-thought process, stand to gain significantly from this development. This four-phase strategic process – brainstorming, evaluating, expanding, and deciding – involves the AI model generating numerous potential solutions, refining each, and finally, choosing the most effective one.

The larger context window of Claude 2 could enhance each phase of this process. For example, during the brainstorming phase, the model could generate an expanded range of ideas for problem-solving. As the evaluation and expansion phases unfold, the model could provide a more nuanced analysis and comprehensive expansion of each potential strategy. Ultimately, the larger context window might enable a more informed decision-making process, with the model having access to broader data to decide the most promising approach.
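The four-phase loop described above can be sketched as a simple search procedure. This is an illustrative toy, not Anthropic’s implementation: the `brainstorm`, `evaluate`, and `expand` helpers are hypothetical stand-ins for what would be LLM calls in a real system, with trivial placeholder logic.

```python
# Toy sketch of the tree-of-thought loop: brainstorm, evaluate, expand, decide.
# The helper functions are hypothetical stand-ins for real LLM calls.

def brainstorm(problem: str, n: int = 3) -> list:
    # Stand-in: a real system would prompt the model for n candidate ideas.
    return [f"idea {i} for {problem}" for i in range(1, n + 1)]

def evaluate(idea: str) -> int:
    # Stand-in: a real system would ask the model to score each idea.
    return len(idea)  # toy scoring heuristic

def expand(idea: str) -> str:
    # Stand-in: a real system would ask the model to flesh out the idea.
    return idea + " (expanded with details)"

def tree_of_thought(problem: str) -> str:
    ideas = brainstorm(problem)                 # 1. brainstorm candidates
    scored = [(evaluate(i), i) for i in ideas]  # 2. evaluate each one
    top = sorted(scored, reverse=True)[:2]      # keep the best candidates
    expanded = [expand(i) for _, i in top]      # 3. expand the survivors
    return max(expanded, key=evaluate)          # 4. decide on the best
```

A larger context window matters at every step: more candidate ideas fit in the brainstorming prompt, and the final decision step can see all the expanded strategies at once.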

Looking ahead, with the combination of Claude 2’s large token limit and the ever-increasing processing power of AI infrastructure, we can anticipate AI models that can effectively tackle more complex, multifaceted problems and generate increasingly sophisticated solutions.

An example on the AI blog All About AI looks at a real-world scenario of negotiating a pay raise. A more advanced AI model could provide more diverse strategies, anticipate potential responses, formulate persuasive arguments, and give a more detailed action plan. As such, the growth and advancement of generative AI, showcased by Claude 2’s release, are opening new vistas for AI-assisted problem-solving and decision-making processes.
