# Prompt Engineering Fundamentals

February 6, 2023

Prompt engineering has become a critical skill for working with Large Language Models. The quality of your prompts directly determines the quality of the outputs. It’s not about finding magic words; it’s about communicating clearly with a system that has its own strengths, limitations, and failure modes.

Here are the fundamentals of effective prompt engineering.

## Core Principles

### Be Specific and Clear

```yaml
clarity_principles:
  explicit_instructions:
    bad: "Write about databases"
    good: "Write a 500-word technical overview of PostgreSQL indexing strategies for software engineers"

  define_output_format:
    bad: "List some API endpoints"
    good: "List 5 REST API endpoints in this format: METHOD /path - description"

  specify_constraints:
    bad: "Make it shorter"
    good: "Summarize in exactly 3 bullet points, each under 20 words"
```

### Provide Context

```yaml
context_elements:
  role:
    purpose: Set the perspective and expertise level
    example: "You are a senior security engineer reviewing code"

  audience:
    purpose: Adjust complexity and terminology
    example: "Explain this for a junior developer new to the codebase"

  background:
    purpose: Provide necessary information
    example: "Our system uses PostgreSQL 14 with pgvector for embeddings"

  goal:
    purpose: Clarify the desired outcome
    example: "The goal is to identify potential SQL injection vulnerabilities"
```

### Structure Your Prompts

```markdown
# Effective Prompt Structure

## Role (Who)
You are an experienced technical writer.

## Context (What)
You're documenting a REST API for external developers.

## Task (Do)
Write documentation for the following endpoint.

## Format (How)
Use this structure:
- Endpoint: METHOD /path
- Description: One sentence
- Parameters: Table with name, type, required, description
- Response: JSON example
- Errors: Possible error codes

## Input
[Endpoint details here]

## Additional Instructions
- Use clear, simple language
- Include one curl example
- Note any rate limits
```
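
In code, it helps to keep these sections as named parts and assemble them on the fly rather than hand-editing one long string. A minimal Python sketch (the `build_prompt` helper and section names are illustrative assumptions, not a standard API):

```python
# Minimal sketch: assemble a structured prompt from named sections.
# The helper and section order are illustrative, not a standard API.

SECTION_ORDER = ["role", "context", "task", "format", "input", "additional_instructions"]

def build_prompt(**sections: str) -> str:
    """Join the given sections into one prompt, in a fixed order."""
    parts = []
    for name in SECTION_ORDER:
        if name in sections:
            title = name.replace("_", " ").title()
            parts.append(f"## {title}\n{sections[name].strip()}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are an experienced technical writer.",
    context="You're documenting a REST API for external developers.",
    task="Write documentation for the following endpoint.",
    format="- Endpoint: METHOD /path\n- Description: One sentence",
)
```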

## Prompting Techniques

### Few-Shot Learning

```markdown
# Few-Shot Prompting

Provide examples to establish the pattern:

Convert these function names to descriptive docstrings.

Example 1:
Function: calculate_compound_interest(principal, rate, years)
Docstring: """Calculate compound interest for a given principal amount over a specified number of years at a given annual interest rate."""

Example 2:
Function: validate_email_format(email_string)
Docstring: """Validate that the provided string matches standard email format patterns."""

Now convert:
Function: parse_csv_with_headers(file_path, delimiter)
Docstring:
```
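
When the same pattern is reused across many inputs, it is less error-prone to generate the examples from data than to maintain the string by hand. A minimal sketch (the `few_shot_prompt` helper is a hypothetical convenience, not a library function):

```python
# Minimal sketch: build a few-shot prompt from (input, output) pairs.
# The helper is a hypothetical convenience, not a library function.

def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    lines = [task, ""]
    for i, (fn, doc) in enumerate(examples, start=1):
        lines += [f"Example {i}:", f"Function: {fn}", f"Docstring: {doc}", ""]
    lines += ["Now convert:", f"Function: {query}", "Docstring:"]
    return "\n".join(lines)

examples = [
    ("calculate_compound_interest(principal, rate, years)",
     '"""Calculate compound interest for a given principal over a number of years."""'),
    ("validate_email_format(email_string)",
     '"""Validate that the provided string matches standard email format patterns."""'),
]
prompt = few_shot_prompt(
    "Convert these function names to descriptive docstrings.",
    examples,
    "parse_csv_with_headers(file_path, delimiter)",
)
```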

### Chain of Thought

```markdown
# Chain of Thought

For complex reasoning, ask the model to think step by step:

Analyze whether this database query could cause performance issues.

Query:
SELECT * FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.created_at > '2023-01-01'
ORDER BY o.total DESC
LIMIT 100;

Please analyze step by step:
1. What tables and operations are involved?
2. What indexes would be helpful?
3. What's the expected data volume impact?
4. What potential issues exist?
5. What improvements would you recommend?

Analysis:
```
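
A common follow-up is to run this in two stages: first collect the step-by-step analysis, then feed it back and ask for only the final recommendation. A sketch assuming the OpenAI completion API that was current when this was written (`text-davinci-003`); any completion-style endpoint works the same way:

```python
import openai  # assumes the pre-chat completion API (early 2023)

COT_PROMPT = """Analyze whether this database query could cause performance issues.
[query and numbered steps from above]
Analysis:"""

def complete(prompt: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,      # keep the reasoning as stable as possible
        max_tokens=500,
    )
    return resp["choices"][0]["text"].strip()

analysis = complete(COT_PROMPT)
recommendation = complete(
    COT_PROMPT + "\n" + analysis
    + "\n\nBased only on the analysis above, state the single most "
    "important improvement in one sentence:"
)
```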

### Self-Consistency

```markdown
# Multiple Reasoning Paths

For important decisions, generate multiple approaches:

I need to design a caching strategy for our API. Generate 3 different approaches, then recommend the best one.

Requirements:
- 10,000 requests per second
- 95% cache hit rate target
- Data changes every 5 minutes
- Budget-conscious

Approach 1:
[Let model generate]

Approach 2:
[Let model generate]

Approach 3:
[Let model generate]

Recommendation:
Based on the requirements, which approach is best and why?
```
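
Programmatically, self-consistency just means sampling several completions at a higher temperature, then judging them yourself or with a second prompt. A sketch under the same completion-API assumption as above:

```python
import openai  # same completion-API assumption as above

def sample_approaches(prompt: str, n: int = 3) -> list[str]:
    """Sample n independent answers; higher temperature gives diversity."""
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.8,
        max_tokens=400,
        n=n,  # n independent completions in a single call
    )
    return [choice["text"].strip() for choice in resp["choices"]]

approaches = sample_approaches("Design a caching strategy for our API. Requirements: ...")
judge_prompt = (
    "Here are three candidate designs:\n\n"
    + "\n\n---\n\n".join(approaches)
    + "\n\nWhich approach best fits the requirements, and why?"
)
```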

### Structured Output

```markdown
# Force Structured Output

Extract the following information from this error log as JSON:

{
  "error_type": "string",
  "timestamp": "ISO-8601 datetime",
  "affected_service": "string",
  "root_cause": "string",
  "severity": "low|medium|high|critical",
  "suggested_actions": ["array of strings"]
}

Error log:
[Log content here]

JSON output:
```
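
On the consuming side, treat the model's reply as untrusted input: parse it, and validate the keys before using it. A minimal stdlib-only sketch:

```python
import json

# Keys the prompt above asks for; adjust to match your schema.
REQUIRED_KEYS = {"error_type", "timestamp", "affected_service",
                 "root_cause", "severity", "suggested_actions"}

def parse_model_json(reply: str) -> dict:
    """Parse a model reply as JSON, tolerating surrounding prose or fences."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        # Models sometimes wrap JSON in text; slice out the outermost braces.
        start, end = reply.find("{"), reply.rfind("}")
        if start == -1 or end <= start:
            raise ValueError("no JSON object found in reply")
        data = json.loads(reply[start:end + 1])
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply is missing keys: {sorted(missing)}")
    return data
```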

## Common Patterns

### Code Generation

```markdown
# Effective Code Prompts

Write a Python function that meets these requirements:

Function name: rate_limit_check
Parameters:
- user_id: str
- redis_client: Redis connection
- limit: int (requests per minute)
- window: int (seconds, default 60)

Behavior:
- Returns True if request is allowed, False if rate limited
- Uses sliding window algorithm
- Handles Redis connection errors gracefully
- Thread-safe

Include:
- Type hints
- Docstring
- Error handling
- Unit test example
```
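
For reference, a prompt this specific tends to converge on something like the following: a sorted-set sliding window in Redis. This is a hedged sketch of one plausible answer (it assumes the `redis-py` client and fails open on Redis errors), not the model's actual output:

```python
import time
import uuid

import redis


def rate_limit_check(user_id: str, redis_client: redis.Redis,
                     limit: int, window: int = 60) -> bool:
    """Return True if the request is allowed, False if rate limited.

    Sliding window: each request is a sorted-set member scored by its
    timestamp; entries older than the window are pruned before counting.
    """
    key = f"rate:{user_id}"
    now = time.time()
    try:
        pipe = redis_client.pipeline()
        pipe.zremrangebyscore(key, 0, now - window)  # drop expired requests
        pipe.zcard(key)                              # count the rest
        _, current = pipe.execute()
        if current >= limit:
            return False
        # Check-then-add is not atomic across clients; a Lua script
        # would close that gap if strict enforcement matters.
        redis_client.zadd(key, {str(uuid.uuid4()): now})
        redis_client.expire(key, window)             # let idle keys clean up
        return True
    except redis.RedisError:
        return True  # fail open (an assumption; fail closed if abuse is the bigger risk)
```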

### Code Review

````markdown
# Code Review Prompt

Review this code for:
1. Bugs and logic errors
2. Security vulnerabilities
3. Performance issues
4. Code style and readability
5. Missing error handling

For each issue found, provide:
- Line number(s)
- Issue description
- Severity (low/medium/high/critical)
- Suggested fix

Code:
```python
[code here]
```

Review:
````

### Technical Writing

```markdown
# Documentation Generation

Generate API documentation for this endpoint:

Endpoint: POST /api/v1/orders
Authentication: Bearer token required
Rate limit: 100 requests per minute

Request body:
{
  "items": [{"product_id": "string", "quantity": int}],
  "shipping_address_id": "string",
  "payment_method_id": "string"
}

Response: Order object with status

Write documentation including:
- Description
- Authentication requirements
- Request/response examples
- Error codes and meanings
- Code examples in Python and JavaScript

## Avoiding Common Mistakes

### Anti-Patterns

```yaml
prompt_antipatterns:
  too_vague:
    bad: "Help me with my code"
    fix: "Debug this Python function that should calculate factorial but returns wrong values for n>10"

  information_overload:
    bad: "[Paste entire codebase] Find the bug"
    fix: "Here's the specific function with the issue and expected vs actual behavior"

  no_format_guidance:
    bad: "List all the things wrong with this"
    fix: "List issues as: [SEVERITY] Line X: Description - Suggested fix"

  ambiguous_requirements:
    bad: "Make it better"
    fix: "Improve readability by: extracting helper functions, adding type hints, using descriptive variable names"
```

### Debugging Prompts

```yaml
prompt_debugging:
  not_working:
    - Check that instructions are clear and specific
    - Add examples of desired output
    - Break the task into smaller steps
    - Add constraints and format requirements

  inconsistent_outputs:
    - Lower temperature (0-0.3 for near-deterministic output)
    - Add more examples
    - Be more specific about format
    - Use structured output (JSON)

  wrong_style:
    - Specify the audience explicitly
    - Provide style examples
    - Define do's and don'ts
```

### Temperature Guide

```yaml
temperature_settings:
  zero:
    use_case: Deterministic outputs, code generation, classification
    example: "Extract entities from this text as JSON"

  low_0_3:
    use_case: Factual writing, technical documentation, analysis
    example: "Explain how this algorithm works"

  medium_0_5_0_7:
    use_case: Balanced creativity, general writing
    example: "Write a blog post about microservices"

  high_0_8_1_0:
    use_case: Creative writing, brainstorming, exploration
    example: "Generate 10 creative names for this product"
```

## Key Takeaways

Prompt engineering is communication. Clear communication gets better results.