xAI

The xAI node connects to xAI’s language models to process natural language questions and generate responses. It sends prompts to a specified model and returns text-based answers, authenticating with a token-based API key.

Inputs

  • Prompt – Text prompt for the model
  • Questions – Natural language prompts or questions submitted to the configured xAI model for processing
  • Documents – Document objects that provide context
  • System – System instructions for the model

Outputs

  • Text – Generated text output
  • Answers – The generated response from the model, returned as a text string that can be passed to other nodes for display, storage, or analysis

Configuration

Model Settings

  • Model – xAI model to use
    • Default – “grok-1”
    • Notes – Available models include grok-1, grok-2
      • Use the latest Grok model available for best performance
      • Consider model capabilities when designing prompts and workflows
  • API Key – xAI API key
    • Notes – Required for authentication
  • Temperature – Creativity/randomness level
    • Default – 0.7
    • Notes – Range: 0.0-1.0
  • Max Tokens – Maximum response length
    • Default – 1024
    • Notes – Limits output size
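
The constraints above (required API key, temperature range, token limit) can be sketched as a small validation helper. This is illustrative only; the function and field names are assumptions, not the node's actual schema.

```python
# Illustrative helper mirroring the documented model settings and their
# defaults; the field names and validation are assumptions for clarity,
# not the node's real implementation.
def make_model_settings(model="grok-1", api_key="", temperature=0.7, max_tokens=1024):
    if not api_key:
        raise ValueError("API Key is required for authentication")
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("Temperature must be within 0.0-1.0")
    if max_tokens <= 0:
        raise ValueError("Max Tokens must be positive")
    return {
        "model": model,
        "apiKey": api_key,
        "temperature": temperature,
        "maxTokens": max_tokens,
    }
```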

Advanced Settings

  • Top P – Nucleus sampling parameter
    • Default – 0.95
    • Notes – Controls diversity
  • Top K – Top-K sampling parameter
    • Default – 40
    • Notes – Limits token selection
  • System Prompt – Default system instructions
    • Notes – Sets model behavior
  • Stop Sequences – Sequences to stop generation
    • Default – []
    • Notes – Custom stop tokens
  • Timeout – API request timeout
    • Default – 60
    • Notes – In seconds
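
The advanced settings above typically map onto standard sampling parameters in the underlying API request. The sketch below shows one plausible mapping with the documented defaults; the key names on both sides are assumptions for illustration.

```python
# Documented defaults for the advanced settings.
DEFAULTS = {"topP": 0.95, "topK": 40, "stopSequences": [], "timeout": 60}

def to_request_params(config):
    """Merge node config over defaults and map to typical API
    sampling-parameter names (an assumed mapping, for illustration)."""
    cfg = {**DEFAULTS, **config}
    params = {
        "top_p": cfg["topP"],
        "top_k": cfg["topK"],
        "stop": cfg["stopSequences"],
    }
    return params, cfg["timeout"]
```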

Example Usage

Basic Text Generation

This example shows a basic text generation configuration for the xAI node:
{
  "model": "grok-1",
  "apiKey": "your-api-key",
  "temperature": 0.7,
  "maxTokens": 1024,
  "topP": 0.95
}
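
A configuration like the one above ultimately becomes an HTTP request to xAI's API. The sketch below builds such a request, assuming xAI's OpenAI-compatible chat completions endpoint (`https://api.x.ai/v1/chat/completions`); verify the endpoint and request shape against the current xAI documentation before relying on this.

```python
import json
import urllib.request

def build_request(config, prompt):
    """Build (but do not send) a chat completion request from the node
    config; the endpoint and payload shape assume xAI's
    OpenAI-compatible API."""
    payload = {
        "model": config["model"],
        "messages": [{"role": "user", "content": prompt}],
        "temperature": config["temperature"],
        "max_tokens": config["maxTokens"],
        "top_p": config["topP"],
    }
    return urllib.request.Request(
        "https://api.x.ai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {config['apiKey']}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) would return the generated text on the node's output port.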

RAG Implementation

For a Retrieval-Augmented Generation (RAG) implementation:
{
  "model": "grok-2",
  "apiKey": "your-api-key",
  "temperature": 0.3,
  "maxTokens": 2048,
  "systemPrompt": "You are a helpful assistant that answers questions based on the provided documents. Always cite your sources and maintain a professional tone.",
  "topP": 0.9,
  "timeout": 120
}
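
In a RAG setup, the retrieved documents arriving on the Documents port must be folded into the prompt alongside the question. The helper below is a hedged sketch of that assembly step; the formatting is illustrative, not the node's internal behavior.

```python
def build_rag_prompt(question, documents):
    """Combine retrieved document texts and a question into a single
    prompt with numbered, citable sources (illustrative format)."""
    context = "\n\n".join(
        f"[Source {i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using only the sources below and cite them.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```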

Best Practices

Prompt Engineering

  • Provide clear, specific instructions in your prompts
  • Use system prompts to establish consistent behavior
  • Include relevant context for knowledge-intensive tasks
  • Structure prompts with clear sections for complex tasks
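
The last point, structuring prompts with clear sections, can be sketched as a small template helper; the section names are illustrative choices, not a required format.

```python
def sectioned_prompt(task, context, constraints):
    """Assemble a complex prompt from clearly labeled sections
    (illustrative section names)."""
    return "\n\n".join([
        f"## Task\n{task}",
        f"## Context\n{context}",
        f"## Constraints\n{constraints}",
    ])
```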

Performance Optimization

  • Adjust temperature based on task requirements (lower for factual responses, higher for creative content)
  • Set appropriate max tokens to avoid unnecessary processing
  • Use streaming for responsive user interfaces
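
The temperature guidance above can be captured as task-specific presets; the exact values here are assumptions chosen to reflect the lower-for-factual, higher-for-creative rule.

```python
# Illustrative presets reflecting the tuning guidance above;
# the specific values are assumptions, not recommendations from xAI.
PRESETS = {
    "factual": {"temperature": 0.2, "maxTokens": 512},
    "creative": {"temperature": 0.9, "maxTokens": 2048},
}
```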

Troubleshooting

API Problems

  • Authentication errors – Verify API key validity
  • Rate limit exceeded – Implement request throttling or upgrade API tier
  • Timeout errors – Increase timeout setting or reduce prompt/context size
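
Request throttling for rate-limit errors is commonly implemented as retry with exponential backoff. The sketch below shows the pattern; `RuntimeError` stands in for whatever rate-limit exception the surrounding integration raises.

```python
import time

def with_backoff(call, retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff on a rate-limit error
    (here a RuntimeError placeholder); re-raise after `retries` tries."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```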

Response Quality Issues

  • Irrelevant responses – Refine prompts or adjust system instructions
  • Inconsistent outputs – Lower temperature for more deterministic responses
  • Truncated responses – Increase max tokens setting
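
Truncation can often be detected programmatically before increasing max tokens. The check below assumes an OpenAI-style response payload in which `finish_reason == "length"` means the token limit was hit; the payload shape is an assumption.

```python
def is_truncated(response):
    """Return True if an OpenAI-style completion payload stopped
    because it hit the max-tokens limit (assumed payload shape)."""
    return response["choices"][0].get("finish_reason") == "length"
```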

Technical Reference

For detailed technical information, refer to: