ChatGroq
This will help you get started with Groq chat models. For detailed documentation of all ChatGroq features and configurations, head to the API reference.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
| :--- | :--- | :---: | :---: | :---: |
| ChatGroq | langchain-groq | ❌ | beta | ✅ |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
Setup
To access Groq models you'll need to create a Groq account, get an API key, and install the langchain-groq
integration package.
Credentials
Head to the Groq console (https://console.groq.com) to sign up for Groq and generate an API key. Once you've done this, set the GROQ_API_KEY environment variable:
import getpass
import os
os.environ["GROQ_API_KEY"] = getpass.getpass("Enter your Groq API key: ")
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
The LangChain Groq integration lives in the langchain-groq
package:
%pip install -qU langchain-groq
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_groq import ChatGroq
llm = ChatGroq(
    model="mixtral-8x7b-32768",  # Groq-hosted model to use
    temperature=0,  # deterministic output
    max_tokens=None,  # no cap on completion length
    timeout=None,  # no request timeout
    max_retries=2,  # retry failed requests up to twice
    # other params...
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
The reply is an AIMessage; the translated text is available on its content attribute:
print(ai_msg.content)
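The features table above also lists token-level streaming and native async support. As a minimal sketch, reusing the llm and messages defined above:
for chunk in llm.stream(messages):
    # Each chunk is an AIMessageChunk; print tokens as they arrive
    print(chunk.content, end="", flush=True)
# In an async context (e.g. a notebook cell), the async API mirrors the sync one:
# ai_msg = await llm.ainvoke(messages)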
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
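ChatGroq also supports tool calling and structured output (see the features table above). As a minimal sketch, assuming a recent langchain-core that accepts Pydantic v2 models, and using a Translation schema of our own invention:
from pydantic import BaseModel, Field

class Translation(BaseModel):
    """Structured reply schema (illustrative, not part of the library)."""

    text: str = Field(description="The translated sentence")
    language: str = Field(description="The target language")

structured_llm = llm.with_structured_output(Translation)
structured_llm.invoke("Translate 'I love programming.' into German.")
Here with_structured_output wraps the model so replies are parsed into the schema instead of returned as raw messages.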
API reference
For detailed documentation of all ChatGroq features and configurations head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html