Deepseek

The Deepseek node connects to Deepseek's large language models to process natural-language questions and generate answers. It integrates LLM functionality into a pipeline and authenticates with a token-based API key.

Inputs

  • Prompt – Text prompt for the model
  • Questions – User- or system-generated questions or prompts, sent to the Deepseek model for processing
  • Documents – Document objects for context
  • System – System instructions for the model

Outputs

  • Text – Generated text output
  • Answers – Generated text returned by the model; responses can be routed to downstream nodes for further use

Configuration

Model Settings

  • Model – Deepseek model to use
    • Default – “deepseek-chat”
    • Note – Available models include deepseek-chat, deepseek-coder
      • Use deepseek-chat for general text generation and question answering
      • Use deepseek-coder for programming-related tasks and code generation
  • API Key – Deepseek API key
    • Note – Required for authentication
  • Temperature – Creativity/randomness level
    • Default – 0.7
    • Note – Range: 0.0-1.0
  • Max Tokens – Maximum response length
    • Default – 1024
    • Note – Limits output size
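The model settings above can be sketched as a small validation helper. This is a minimal illustration of the defaults and constraints listed (required API key, temperature range 0.0-1.0, positive max tokens); the function and field names are illustrative, not part of the node's actual API.

```python
def validate_model_settings(settings):
    """Apply the documented defaults and reject out-of-range values.

    Illustrative only: mirrors the Model Settings table, not the node's code.
    """
    defaults = {"model": "deepseek-chat", "temperature": 0.7, "maxTokens": 1024}
    merged = {**defaults, **settings}
    if "apiKey" not in merged:
        raise ValueError("API Key is required for authentication")
    if not 0.0 <= merged["temperature"] <= 1.0:
        raise ValueError("Temperature must be in the range 0.0-1.0")
    if merged["maxTokens"] <= 0:
        raise ValueError("Max Tokens must be positive")
    return merged
```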

Advanced Settings

  • Top P – Nucleus sampling parameter
    • Default – 0.95
    • Note – Controls diversity
  • Top K – Top-K sampling parameter
    • Default – 40
    • Note – Limits token selection
  • System Prompt – Default system instructions
    • Note – Sets model behavior
  • Stop Sequences – Sequences to stop generation
    • Default – []
    • Note – Custom stop tokens
  • Timeout – API request timeout
    • Default – 60
    • Note – In seconds
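Stop sequences end generation at the first match of any configured token. A minimal client-side sketch of that behavior, assuming stop sequences are plain substrings (the actual cutoff happens server-side during generation):

```python
def apply_stop_sequences(text, stop_sequences):
    """Truncate text at the earliest occurrence of any stop sequence.

    Illustrative sketch of stop-sequence semantics; the real model stops
    generating at the sequence rather than trimming afterwards.
    """
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```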

Example Usage

Basic Text Generation

This example shows how to configure the Deepseek LLM for basic text generation:
{
  "model": "deepseek-chat",
  "apiKey": "your-api-key",
  "temperature": 0.7,
  "maxTokens": 1024,
  "topP": 0.95
}
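To make the mapping from node configuration to an actual request concrete, here is a sketch that builds a chat-completion payload from such a config. The field names (`messages`, `max_tokens`, `top_p`) follow the OpenAI-compatible schema Deepseek's API uses; treat the exact schema as an assumption, and note this builds the payload only, without sending it.

```python
def build_request(config, prompt, system=None):
    """Translate node-style config keys into a chat-completion payload.

    Assumes an OpenAI-compatible request schema; illustrative only.
    """
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return {
        "model": config["model"],
        "messages": messages,
        "temperature": config.get("temperature", 0.7),
        "max_tokens": config.get("maxTokens", 1024),
        "top_p": config.get("topP", 0.95),
    }
```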

Code Generation with Deepseek Coder

For code generation tasks using the specialized coder model:
{
  "model": "deepseek-coder",
  "apiKey": "your-api-key",
  "temperature": 0.3,
  "maxTokens": 2048,
  "systemPrompt": "You are an expert programmer. Generate clean, efficient, and well-documented code based on the requirements provided.",
  "topP": 0.9,
  "timeout": 120
}

Best Practices

Prompt Engineering

  • Provide clear, specific instructions in your prompts
  • Use system prompts to establish consistent behavior
  • Include relevant context for knowledge-intensive tasks
  • Structure prompts with clear sections for complex tasks
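The structuring tips above can be sketched as a small prompt builder that labels each section explicitly. The section headings and helper name are illustrative choices, not a required format:

```python
def build_structured_prompt(task, context=None, constraints=None):
    """Assemble a prompt with clearly labeled sections.

    Illustrative: section names ("Task", "Context", "Constraints") are one
    reasonable convention, not something the model requires.
    """
    sections = [f"## Task\n{task}"]
    if context:
        sections.append(f"## Context\n{context}")
    if constraints:
        bullet_list = "\n".join(f"- {c}" for c in constraints)
        sections.append(f"## Constraints\n{bullet_list}")
    return "\n\n".join(sections)
```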

Performance Optimization

  • Adjust temperature based on task requirements (lower for factual responses, higher for creative content)
  • Set appropriate max tokens to avoid unnecessary processing
  • Use streaming for responsive user interfaces
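With streaming enabled, output arrives as incremental text chunks rather than one final response. A minimal consumer sketch, assuming the client yields plain-text deltas (the exact chunk shape depends on the client library):

```python
def consume_stream(chunks, on_chunk):
    """Hand each streamed chunk to a callback (e.g. a UI update) while
    accumulating the full response text.

    Illustrative: assumes an iterable of plain-text deltas.
    """
    parts = []
    for chunk in chunks:
        on_chunk(chunk)
        parts.append(chunk)
    return "".join(parts)
```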

Troubleshooting

API Problems

  • Authentication errors: Verify API key validity
  • Rate limit exceeded: Implement request throttling or upgrade API tier
  • Timeout errors: Increase timeout setting or reduce prompt/context size
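Request throttling for rate-limit errors is commonly implemented as retry with exponential backoff. A minimal sketch; `RateLimitError` here is a stand-in for whatever exception the actual client raises:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the rate-limit exception a real client would raise."""

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry a callable on rate-limit errors, doubling the delay each time.

    Re-raises after the final attempt so callers still see hard failures.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```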

Response Quality Issues

  • Irrelevant responses: Refine prompts or adjust system instructions
  • Inconsistent outputs: Lower temperature for more deterministic responses
  • Truncated responses: Increase max tokens setting
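Truncation can usually be detected programmatically before adjusting max tokens. A sketch assuming the OpenAI-compatible response schema, where `finish_reason` is "length" when the token limit cut the response short:

```python
def is_truncated(response):
    """Return True if the response hit the max-tokens limit.

    Assumes an OpenAI-compatible response dict with a "choices" list whose
    entries carry a "finish_reason" field.
    """
    choices = response.get("choices", [{}])
    return choices[0].get("finish_reason") == "length"
```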

Technical Reference

For detailed technical information, refer to: