Prompt Engineering for Computer Science & IT
For CS and IT students, LLMs have moved from a research curiosity to a daily tool. Agents, RAG pipelines, and evaluation harnesses are showing up in job descriptions, open-source projects, and grad-school research alike.
Where this is showing up in CS & IT
- Coding agents such as Cursor, Claude Code, and GitHub Copilot Workspace now plan and ship multi-file changes inside real repos.
- Open-source RAG frameworks (LangChain, LlamaIndex, Haystack) power internal search and Q&A at many large engineering orgs.
- Evaluation is becoming its own subfield: OpenAI Evals, the UK AI Security Institute's Inspect framework, and benchmarks like SWE-bench and HumanEval drive model selection.
- Structured outputs, tool/function calling, and multi-agent orchestration are hardening into standard production patterns; the tool-calling pattern is sketched just after this list.
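As a taste of that last pattern, here is a minimal tool-calling sketch. It assumes the OpenAI Python SDK (v1.x); the `get_weather` tool, its schema, and the model name are illustrative placeholders rather than anything this course prescribes.

```python
# Minimal tool-calling sketch, assuming the OpenAI Python SDK (v1.x).
# The get_weather tool and the model name are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_weather(city: str) -> str:
    """Stub implementation; a real tool would call a weather API."""
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 21})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=messages,
    tools=tools,
)

# If the model chose to call the tool, execute it and send back the result.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)
    messages.append(message)  # the assistant's tool-call turn
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```

The same loop generalizes to any number of tools: the model picks a function and its arguments, your code executes it, and the result goes back as a tool message for the model to read.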
Projects you could build in this course
- A coding agent that navigates a repo and proposes pull requests
- A RAG system over a large technical documentation corpus
- An evaluation harness that benchmarks prompts across models and tracks regressions (a minimal starting point is sketched below)
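To make the third project concrete, here is a minimal sketch of such a harness. Everything in it is illustrative: `call_model` is a hypothetical stub you would replace with real provider SDK calls, and the test cases and baseline file are placeholders.

```python
# Minimal evaluation-harness sketch. call_model is a hypothetical stub:
# swap in real API clients for the models you want to compare.
import json
from pathlib import Path

CASES = [  # illustrative exact-match test cases
    {"prompt": "Reverse the string 'abc'. Answer only.", "expected": "cba"},
    {"prompt": "What is 17 * 3? Answer only.", "expected": "51"},
]

def call_model(model: str, prompt: str) -> str:
    """Stub; a real harness would dispatch to each provider's SDK."""
    return ""  # placeholder answer so the harness runs end-to-end

def score(model: str) -> float:
    """Fraction of cases where the model's answer matches exactly."""
    hits = sum(
        call_model(model, c["prompt"]).strip() == c["expected"] for c in CASES
    )
    return hits / len(CASES)

def check_regressions(models: list[str], baseline_path: str = "baseline.json") -> None:
    """Compare current scores to a saved baseline and flag any drops."""
    path = Path(baseline_path)
    baseline = json.loads(path.read_text()) if path.exists() else {}
    current = {m: score(m) for m in models}
    for m, s in current.items():
        prev = baseline.get(m)
        if prev is not None and s < prev:
            print(f"REGRESSION: {m} dropped {prev:.2f} -> {s:.2f}")
        else:
            print(f"OK: {m} scored {s:.2f}")
    path.write_text(json.dumps(current, indent=2))  # becomes the new baseline

if __name__ == "__main__":
    check_regressions(["model-a", "model-b"])  # illustrative model names
```

Exact match is the simplest possible scorer; a real harness would layer in graded rubrics, model-based judges, and per-case metadata, but the baseline-diffing loop stays the same.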