The Art of Prompting: How to Get Better Results from AI

In the world of DevOps, SREs, and software development, Generative AI has evolved from a novel curiosity into a powerful co-pilot. Whether it’s drafting a complex Bash script, debugging a Kubernetes manifest, or scaffolding a Terraform module, AI models can drastically accelerate our workflows. But there’s a catch: their utility is directly proportional to the quality of our instructions. This skill, which we call The Art of Prompting, is the new dividing line between frustrating, generic outputs and precise, production-ready results. For technical professionals, mastering this art isn’t just a recommendation; it’s becoming a core competency.

If you’ve ever asked an AI for a script and received a “hello world” example, or requested a complex configuration only to get a buggy, insecure, or completely hallucinatory response, this guide is for you. We will move beyond simple questions and dive into the structured techniques of “prompt engineering” tailored specifically for a technical audience. We’ll explore how to provide context, define personas, set constraints, and use advanced methods to transform your AI assistant from a “clueless intern” into a “seasoned senior engineer.”

Why Is Mastering “The Art of Prompting” Critical for Technical Roles?

The “Garbage In, Garbage Out” (GIGO) principle has never been more relevant. In a non-technical context, a bad prompt might lead to a poorly written email or a nonsensical story. In a DevOps or SRE context, a bad prompt can lead to a buggy deployment, a security vulnerability, or system downtime. The stakes are an order of magnitude higher, making The Art of Prompting a critical risk-management and productivity-enhancing skill.

From Vague Request to Precise Tool

Think of a Large Language Model (LLM) as an incredibly knowledgeable, eager-to-please, but literal-minded junior developer. It has read virtually every piece of documentation, blog post, and Stack Overflow answer ever written. However, it lacks real-world experience, context, and the implicit understanding that a human senior engineer possesses.

  • A vague prompt like “make a script to back up my database” is ambiguous. What database? What backup method? Where should it be stored? What are the retention policies? The AI is forced to guess, and it will likely provide a generic pg_dump command with no error handling.
  • A precise prompt specifies the persona (“You are a senior SRE”), the context (“I have a PostgreSQL database running on RDS”), the constraints (“use pg_dump, compress with gzip, upload to an S3 bucket”), and the requirements (“the script must be idempotent and include robust error handling and logging”).

The second prompt treats the AI not as a magic wand, but as a technical tool. It provides a “spec” for the code it wants, resulting in a far more useful and safer output. (We build out this exact prompt in Scenario 1 below.)

The Cost of Imprecision: Security, Stability, and Time

In our field, small mistakes have large consequences. An AI-generated script that forgets to set correct file permissions (chmod 600) on a key file, a Terraform module that defaults to allowing public access on an S3 bucket, or a sed command that misinterprets a regex can all create critical security flaws. Relying on a vague prompt and copy-pasting the result is a recipe for disaster. Mastering prompting is about embedding your own senior-level knowledge—your “non-functional requirements” like security, idempotency, and reliability—into the request itself.

The Core Principles of Effective Prompting for AI

Before diving into advanced techniques, let’s establish the four pillars of a perfect technical prompt. Think of it as the “R.C.C.E.” framework: Role, Context, Constraints, and Examples.

1. Set the Stage: The Power of Personas (Role)

Always begin your prompt by telling the AI *who it is*. This simple instruction dramatically shifts the tone, style, and knowledge base the model draws from. By assigning a role, you prime the AI to think in terms of best practices associated with that role.

  • Bad: “How do I expose a web server in Kubernetes?”
  • Good: “You are a Kubernetes Security Expert. What is the most secure way to expose a web application to the internet, and why is using a NodePort service generally discouraged for production?”

2. Be Explicit: Providing Clear Context

The AI does not know your environment, your tech stack, or your goals. You must provide this context explicitly. The more relevant details you provide, the less the AI has to guess.

  • Vague: “My code isn’t working.”
  • Detailed Context: “I’m running a Python 3.10 script in a Docker container built from python:3.10-alpine. I’m getting a ModuleNotFoundError for the requests library, even though it is listed in my requirements.txt file. Here is my Dockerfile and my requirements.txt:”
# Dockerfile
FROM python:3.10-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]

# requirements.txt
requests==2.31.0

3. Define the Boundaries: Applying Constraints

This is where you tell the AI what *not* to do and define the shape of the desired output. Constraints are your “guardrails.”

  • Tech Constraints: “Use only standard Bash utilities (avoid jq or yq).” “Write this in Python 3.9 without any external libraries.” “This Ansible playbook must be idempotent.” (The jq constraint is illustrated in the sketch after this list.)
  • Format Constraints: “Provide the output in JSON format.” “Structure the answer as a .tf file for the module and a separate variables.tf file.” “Explain the solution in bullet points, followed by the complete code block.”
  • Negative Constraints: “Do not use the latest tag for any Docker images.” “Ensure the solution does not store any secrets in plain text.”
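Constraints change the shape of the output in very concrete ways. As a quick illustration of the first bullet, here is a minimal sketch (with a hypothetical health-check payload) of the kind of Bash the “avoid jq” constraint should steer a model toward:

#!/usr/bin/env bash
# Hypothetical example: the "avoid jq" constraint pushes the model toward
# standard text utilities. Extract the "status" field from a flat JSON body.
response='{"status":"healthy","uptime":9421}'

# With jq (disallowed by the constraint): status=$(echo "$response" | jq -r '.status')
# Constraint-compliant version using grep and sed only:
status=$(echo "$response" | grep -o '"status":"[^"]*"' | sed 's/.*:"\(.*\)"/\1/')
echo "Service status: $status"

Note that this only works for flat, well-formed JSON; part of your job as the prompter is knowing when a constraint like this is safe to impose.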

4. Provide Examples: Zero-Shot vs. Few-Shot Prompting

This is one of the most powerful concepts in prompt engineering.

  • Zero-Shot Prompting: This is what we do most of the time: you ask the AI to perform a task without providing any examples in your prompt. “Summarize this log file.”
  • Few-Shot Prompting: This is where you provide examples of the input-output pattern you want. This is incredibly effective for formatting, translation, or complex extraction tasks.

Imagine you need to convert messy, unstructured log lines into structured JSON objects.

You are a log parsing utility. Your job is to convert unstructured log lines into a JSON object. Follow the examples I provide.

---
Example 1:
Input: "ERROR: Failed to connect to db-primary-01.us-east-1.prod (10.0.1.50) on port 5432."
Output:
{
  "level": "ERROR",
  "service": "db-primary-01",
  "region": "us-east-1",
  "env": "prod",
  "ip": "10.0.1.50",
  "port": 5432,
  "message": "Failed to connect"
}
---
Example 2:
Input: "INFO: Successful login for user 'admin' from 192.168.1.100."
Output:
{
  "level": "INFO",
  "service": null,
  "region": null,
  "env": null,
  "ip": "192.168.1.100",
  "port": null,
  "message": "Successful login for user 'admin'"
}
---
Now, process the following input:
Input: "WARN: High CPU usage (95%) on app-worker-03.eu-west-1.dev (10.2.3.40)."
Output:

By providing “shots” (examples), you’ve conditioned the AI for your specific task, and it is far more likely to return the perfectly formatted JSON you’re looking for.
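For the record, a model that has locked onto the pattern should return something close to the following for the WARN line (port maps to null because none appears in the input, mirroring Example 2):

{
  "level": "WARN",
  "service": "app-worker-03",
  "region": "eu-west-1",
  "env": "dev",
  "ip": "10.2.3.40",
  "port": null,
  "message": "High CPU usage (95%)"
}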

Advanced Prompt Engineering Techniques for DevOps and Developers

Once you’ve mastered the basics, you can combine them into more advanced, structured techniques to tackle complex problems.

Technique 1: Chain-of-Thought (CoT) Prompting

For complex logic, debugging, or planning, simply asking for the answer can fail. The AI tries to jump to the conclusion and makes a mistake. Chain-of-Thought (CoT) prompting forces the AI to “show its work.” By adding a simple phrase like “Let’s think step-by-step,” you instruct the model to break down the problem, analyze each part, and then synthesize a final answer. This dramatically increases accuracy for reasoning-heavy tasks.

  • Bad Prompt: “Why is my CI/CD pipeline failing at the deploy step? It says ‘connection refused’.”
  • Good CoT Prompt: “My CI/CD pipeline (running in GitLab-CI) is failing when the deploy script tries to ssh into the production server. The error is ssh: connect to host 1.2.3.4 port 22: Connection refused. The runner is on a dynamic IP, and the production server has a firewall.

    Let’s think step-by-step.

    1. What does ‘Connection refused’ mean in the context of SSH?

    2. What are the possible causes (firewall, SSHd not running, wrong port)?

    3. Given the runner is on a dynamic IP, how would a firewall be a likely culprit?

    4. What are the standard solutions for allowing a CI runner to SSH into a server? (e.g., bastion host, static IP for runner, VPN).

    5. Based on this, what are the top 3 most likely root causes and their solutions?”

Technique 2: Structuring Your Prompt for Complex Code Generation

When you need a non-trivial piece of code, don’t write a paragraph. Use markdown, bullet points, and clear sections in your prompt to “scaffold” the AI’s answer. This is like handing a developer a well-defined ticket.

Example: Prompt for a Multi-Stage Dockerfile

You are a Senior DevOps Engineer specializing in container optimization.
I need you to write a multi-stage Dockerfile for a Node.js application.

Here are the requirements:

## Stage 1: "builder"
-   Start from the `node:18-alpine` image.
-   Set the working directory to `/usr/src/app`.
-   Copy `package.json` and `package-lock.json`.
-   Install all dependencies using `npm ci` (the build step needs devDependencies).
-   Copy the rest of the application source code.
-   Run the build script (`npm run build`), then remove devDependencies with `npm prune --omit=dev`.

## Stage 2: "production"
-   Start from the *minimal* `node:18-alpine` base image.
-   Set the working directory to `/app`.
-   Create a non-root user named `appuser` and switch to it.
-   Copy the `node_modules` and `dist` directories from the "builder" stage.
-   Copy the `package.json` file from the "builder" stage.
-   Expose port 3000.
-   Set the command to `node dist/main.js`.

Please provide the complete, commented `Dockerfile`.
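For reference, a response that satisfies this spec might look like the sketch below. The layer ordering and the Alpine user-creation commands are one reasonable interpretation, not the only correct answer:

# ---- Stage 1: builder ----
FROM node:18-alpine AS builder
WORKDIR /usr/src/app
# Copy manifests first so the dependency layer caches between builds
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Build, then drop devDependencies so only runtime deps ship
RUN npm run build && npm prune --omit=dev

# ---- Stage 2: production ----
FROM node:18-alpine
WORKDIR /app
# Create and switch to an unprivileged user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /usr/src/app/node_modules ./node_modules
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/package.json ./
USER appuser
EXPOSE 3000
CMD ["node", "dist/main.js"]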

Technique 3: The “Explain and Critique” Method

Don’t just ask for new code; use the AI to review your *existing* code. This is an excellent way to learn, find bugs, and discover best practices. Paste your code and ask the AI to act as a reviewer.

You are a Senior Staff SRE and a Terraform expert.
I'm going to give you a Terraform module I wrote for an S3 bucket.
Please perform a critical review.

Focus on:
1.  **Security:** Are there any public access loopholes? Is encryption handled correctly?
2.  **Best Practices:** Is the module flexible? Does it follow standard conventions?
3.  **Bugs:** Are there any syntax errors or logical flaws?

Here is the code:

# main.tf
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-awesome-app-bucket"
  acl    = "public-read"

  website {
    index_document = "index.html"
  }
}

Please provide your review in a bulleted list, followed by a "fixed" version of the HCL.
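A good review should flag the public-read ACL, the missing encryption and versioning, and the fact that inline acl and website arguments are deprecated in AWS provider v4+. The “fixed” HCL might look roughly like this sketch (keeping the site private, e.g. behind CloudFront, is an assumption on our part):

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-awesome-app-bucket"
}

# acl = "public-read" is gone; explicitly block all public access instead
resource "aws_s3_bucket_public_access_block" "my_bucket" {
  bucket                  = aws_s3_bucket.my_bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# The deprecated inline website block becomes its own resource
resource "aws_s3_bucket_website_configuration" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id
  index_document {
    suffix = "index.html"
  }
}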

Practical Examples: Applying The Art of Prompting to Real-World Scenarios

Let’s put this all together. Here are three common DevOps tasks, comparing a “vague” prompt with a “precision” prompt.

Scenario 1: Writing a Complex Bash Script

Task: A script to back up a PostgreSQL database and upload to S3.

The “Vague” Prompt

make a postgres backup script that uploads to s3

Result: You’ll get a simple pg_dump ... | aws s3 cp - ... one-liner. It will lack error handling, compression, logging, and configuration.

The “Expert” Prompt

You are a Senior Linux System Administrator.
Write a Bash script to back up a PostgreSQL database.

## Requirements:
1.  **Configuration:** The script must be configurable via environment variables: `DB_NAME`, `DB_USER`, `DB_HOST`, `S3_BUCKET_PATH`.
2.  **Safety:** Use `set -euo pipefail` to ensure the script exits on any error.
3.  **Backup Command:** Use `pg_dump` with a custom format (`-Fc`).
4.  **Compression:** The dump must be piped through `gzip`.
5.  **Filename:** The filename should be in the format: `[DB_NAME]_[YYYY-MM-DD_HHMMSS].sql.gz`.
6.  **Upload:** Upload the final gzipped file to the `S3_BUCKET_PATH` using `aws s3 cp`.
7.  **Cleanup:** The local backup file must be deleted after a successful upload.
8.  **Logging:** The script should echo what it's doing at each major step (e.g., "Starting backup...", "Uploading to S3...", "Cleaning up...").
9.  **Error Handling:** Include a trap to clean up the local file if the script is interrupted or fails.
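Result: Something close to the following sketch (the /tmp staging path and exact log wording are assumptions; the requirement numbers in the comments refer to the prompt above):

#!/usr/bin/env bash
# Sketch of the script this prompt should yield.
set -euo pipefail  # requirement 2

# Requirement 1: fail fast if configuration is missing
: "${DB_NAME:?DB_NAME is required}"
: "${DB_USER:?DB_USER is required}"
: "${DB_HOST:?DB_HOST is required}"
: "${S3_BUCKET_PATH:?S3_BUCKET_PATH is required}"

# Requirement 5: timestamped filename
BACKUP_FILE="/tmp/${DB_NAME}_$(date +%Y-%m-%d_%H%M%S).sql.gz"

# Requirements 7 and 9: the local file is removed on success, failure, or interrupt
cleanup() { rm -f "$BACKUP_FILE"; }
trap cleanup EXIT
trap 'exit 130' INT TERM

echo "Starting backup of ${DB_NAME}..."
# Requirements 3 and 4 (-Fc output is already compressed; gzip is kept because the prompt asks for it)
pg_dump -Fc -h "$DB_HOST" -U "$DB_USER" "$DB_NAME" | gzip > "$BACKUP_FILE"

echo "Uploading to S3..."
aws s3 cp "$BACKUP_FILE" "${S3_BUCKET_PATH}/$(basename "$BACKUP_FILE")"  # requirement 6

echo "Cleaning up..."
echo "Backup of ${DB_NAME} complete."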

Scenario 2: Debugging a Kubernetes Configuration

Task: A pod is stuck in a CrashLoopBackOff state.

The “Vague” Prompt

my pod is CrashLoopBackOff help

Result: The AI will give you a generic list: “Check kubectl logs, check kubectl describe, check your image…” This is not helpful.

The “Expert” Prompt

You are a Certified Kubernetes Administrator (CKA) with deep debugging expertise.
I have a pod stuck in `CrashLoopBackOff`.

Here is the output of `kubectl describe pod my-app-pod`:
[... paste your 'kubectl describe' output here, especially the 'Last State' and 'Events' sections ...]

Here is the output of `kubectl logs my-app-pod`:
[... paste the log output here, e.g., "Error: could not connect to redis on port 6379" ...]

Here is the Deployment YAML:
[... paste your 'deployment.yaml' manifest ...]

Let's think step-by-step:
1.  Analyze the pod logs. What is the explicit error message?
2.  Analyze the 'describe' output. What does the 'Events' section say? What was the exit code?
3.  Analyze the YAML. Is there a liveness/readiness probe failing? Is there a ConfigMap or Secret missing?
4.  Based on the log message "could not connect to redis", cross-reference the YAML.
5.  What is the most probable root cause? (e.g., The app is trying to connect to 'redis:6379', but the Redis service is named 'my-redis-service').
6.  What is the exact fix I need to apply to my Deployment YAML?
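If the root cause is the one hypothesized in step 5, the fix from step 6 is a one-line change to the container’s environment. A sketch, with illustrative names throughout:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.2.3
        env:
        - name: REDIS_HOST
          value: "my-redis-service"  # was "redis", which does not resolve in this cluster
        - name: REDIS_PORT
          value: "6379"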

Scenario 3: Generating Infrastructure as Code (IaC)

Task: Create a Terraform module for a secure S3 bucket.

The “Vague” Prompt

write terraform for an s3 bucket

Result: You’ll get a single resource "aws_s3_bucket" "..." {} block with no security, no versioning, and no variables.

The “Expert” Prompt

You are a Cloud Security Engineer using Terraform.
I need a reusable Terraform module for a *secure* S3 bucket.

## File Structure:
-   `main.tf` (The resources)
-   `variables.tf` (Input variables)
-   `outputs.tf` (Outputs)

## Requirements for `main.tf`:
1.  **`aws_s3_bucket`:** The main resource.
2.  **`aws_s3_bucket_versioning`:** Versioning must be enabled.
3.  **`aws_s3_bucket_server_side_encryption_configuration`:** Must be enabled with `AES256` encryption.
4.  **`aws_s3_bucket_public_access_block`:** All four settings (`block_public_acls`, `ignore_public_acls`, `block_public_policy`, `restrict_public_buckets`) must be set to `true`.
5.  **Tags:** The bucket must be tagged with `Name`, `Environment`, and `ManagedBy` tags, which should be provided as variables.

## Requirements for `variables.tf`:
-   `bucket_name`: string
-   `environment`: string (default "dev")
-   `common_tags`: map(string) (default {})

## Requirements for `outputs.tf`:
-   `bucket_id`: The ID of the bucket.
-   `bucket_arn`: The ARN of the bucket.

Please provide the complete code for all three files.
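The main.tf such a prompt should produce looks roughly like the sketch below (variables.tf and outputs.tf omitted for brevity; merging common_tags with the named tags is one reasonable reading of requirement 5):

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  tags = merge(var.common_tags, {
    Name        = var.bucket_name
    Environment = var.environment
    ManagedBy   = "terraform"
  })
}

# Requirement 2: versioning enabled
resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Requirement 3: server-side encryption with AES256
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Requirement 4: all four public access settings blocked
resource "aws_s3_bucket_public_access_block" "this" {
  bucket                  = aws_s3_bucket.this.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}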

Pitfalls to Avoid: Common Prompting Mistakes in Tech

Mastering this art also means knowing what *not* to do.

  • Never Paste Secrets: This is rule zero. Never, ever paste API keys, passwords, private keys, or proprietary production code into a public AI service. Treat all inputs as public. Ask for *patterns* and *templates*, then fill in your secrets locally.
  • Blind Trust: The AI *will* “hallucinate.” It will invent libraries, flags, and configuration values that look plausible but are completely wrong. Always review, test, and *understand* the code before running it. The AI is your assistant, not your oracle.
  • Forgetting Security: If you don’t *ask* for security, you won’t get it. Always explicitly prompt for security best practices (e.g., “non-root user,” “private access,” “least-privilege IAM policy”).
  • Giving Up Too Early: Your first prompt is rarely your last. Treat it as a conversation. Iteratively refine your request. “That’s good, but now add error handling.” “Can you optimize this for speed?” “Remove the use of that library and do it with Bash built-ins.”

The Future: AI-Assisted DevOps and AIOps

We are just scratching the surface. The next generation of DevOps tools, CI/CD platforms, and observability systems is integrating this “conversational” paradigm. AIOps, which is fundamentally about applying AI to automate and improve IT operations, is already being used to analyze metrics and logs and predict failures. Furthermore, the concept of “AI pair programming” is changing how we write and review code, as discussed by experts like Martin Fowler. Your ability to prompt effectively is your entry ticket to this new generation of tooling.

Frequently Asked Questions

What is the difference between prompt engineering and “The Art of Prompting”?

“Prompt engineering” is the formal, scientific discipline of designing and optimizing prompts to test and guide AI models. “The Art of Prompting,” as we use it, is the practical, hands-on application of these techniques by professionals to get useful results for their daily tasks. It’s less about model research and more about high-leverage communication.

How can I use AI to write secure code?

You must be explicit. Always include security as a core requirement in your prompt.
Example: “You are a security expert. Write a Python Flask endpoint that accepts a file upload. Include checks for file size and file type (only .png and .jpg), and use a secure filename to prevent directory traversal attacks. Do not store the file in a web-accessible directory.”
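A response that takes those requirements seriously might look like this sketch (the upload directory and the 5 MB limit are our assumptions, not part of the prompt):

import os
from flask import Flask, abort, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 5 * 1024 * 1024  # Flask rejects larger bodies with 413
UPLOAD_DIR = "/var/app-uploads"  # assumption: outside any web-served directory
ALLOWED_EXTENSIONS = {".png", ".jpg"}

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files.get("file")
    if file is None or file.filename == "":
        abort(400, "No file provided")
    ext = os.path.splitext(file.filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        abort(400, "Only .png and .jpg files are accepted")
    # secure_filename strips path separators, blocking directory traversal
    filename = secure_filename(file.filename)
    file.save(os.path.join(UPLOAD_DIR, filename))
    return {"status": "ok", "filename": filename}, 201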

Can AI replace DevOps engineers?

No. AI is a tool—a massive force multiplier. It can’t replace the experience, judgment, and “systems thinking” of a good engineer. An engineer who doesn’t understand *why* a firewall rule is needed won’t know to ask the AI for it. AI will replace the *tedious* parts of the job (scaffolding, boilerplate, simple scripts), freeing up engineers to focus on higher-level architecture, reliability, and complex problem-solving. It won’t replace engineers, but engineers who use AI will replace those who don’t.

What is few-shot prompting and why is it useful for technical tasks?

Few-shot prompting is providing 2-3 examples of an input/output pair *before* giving the AI your real task. It’s extremely useful for technical tasks involving data transformation, such as reformatting logs, converting between config formats (e.g., XML to JSON), or extracting specific data from unstructured text.


Conclusion

Generative AI is one of the most powerful tools to enter our ecosystem in a decade. But like any powerful tool, it requires skill to wield. You wouldn’t run rm -rf / without understanding it, and you shouldn’t blindly trust an AI’s output. The key to unlocking its potential lies in your ability to communicate your intent, context, and constraints with precision.

Mastering The Art of Prompting is no longer a ‘nice-to-have’—it is the new superpower for DevOps, SREs, and developers. By treating the AI as a technical co-pilot and providing it with expert-level direction, you can offload rote work, debug faster, learn new technologies, and ultimately build more reliable systems. Start practicing these techniques, refine your prompts, and never stop treating your AI interactions with the same critical thinking you apply to your own code. Thank you for reading the DevopsRoles page!
